tableschema-elasticsearch-py
============================

| |Travis|
| |Coveralls|
| |PyPi|
| |Gitter|

Generate and load ElasticSearch indexes based on `Table
Schema <http://specs.frictionlessdata.io/table-schema/>`__ descriptors.

Features
--------

- implements the ``tableschema.Storage`` interface
Getting Started
---------------

Installation
~~~~~~~~~~~~
The package uses semantic versioning, which means that major versions
could include breaking changes. It's highly recommended to specify a
version range in your ``setup/requirements`` file, e.g.
``tableschema-elasticsearch>=1.0,<2.0``.

.. code:: bash

    pip install tableschema-elasticsearch
Examples
~~~~~~~~

Code examples in this readme require a Python 3.3+ interpreter. You can
find more examples in the
`examples <https://github.com/frictionlessdata/tableschema-elasticsearch-py/tree/master/examples>`__
directory.
.. code:: python

    import elasticsearch
    import jsontableschema_es

    INDEX_NAME = 'testing_index'

    # Connect to an Elasticsearch instance running on localhost
    es = elasticsearch.Elasticsearch()
    storage = jsontableschema_es.Storage(es)

    # List all indexes
    print(list(storage.buckets))

    # Create a new index
    storage.create(INDEX_NAME, [
        ('numbers',
         {
             'fields': [
                 {
                     'name': 'num',
                     'type': 'number'
                 }
             ]
         })
    ])

    # Write data to the index
    l = list(storage.write(INDEX_NAME, 'numbers',
                           ({'num': i} for i in range(1000)), ['num']))
    print(len(l))
    print(l[:10], '...')

    l = list(storage.write(INDEX_NAME, 'numbers',
                           ({'num': i} for i in range(500, 1500)), ['num']))
    print(len(l))
    print(l[:10], '...')

    # Read all data from the index
    storage = jsontableschema_es.Storage(es)
    print(list(storage.buckets))

    l = list(storage.read(INDEX_NAME))
    print(len(l))
    print(l[:10])
Documentation
-------------

The whole public API of this package is described here and follows
semantic versioning rules. Everything outside of this readme is private
API and may change without notice in any new version.

Storage
~~~~~~~

The package implements the `Tabular
Storage <https://github.com/frictionlessdata/tableschema-py#storage>`__
interface (see full documentation at the link):

|Storage|

This driver provides an additional API:

``Storage(es=None)``
^^^^^^^^^^^^^^^^^^^^

- ``es (object)`` - an ``elasticsearch.Elasticsearch`` instance. If not
  provided, a new one will be created.
In this driver ``elasticsearch`` is used as the database wrapper. We can
get a storage instance this way:

.. code:: python

    from elasticsearch import Elasticsearch
    from jsontableschema_es import Storage

    engine = Elasticsearch()
    storage = Storage(engine)
Then we can interact with the storage ('buckets' are ElasticSearch
indexes in this context):

.. code:: python

    storage.buckets  # iterator over bucket names

    storage.create('bucket', [(doc_type, descriptor)],
                   reindex=False,
                   always_recreate=False,
                   mapping_generator_cls=None)
    # reindex copies existing documents from an existing index with the
    #   same name (in case of a mapping conflict)
    # always_recreate always recreates the index, even if it already
    #   exists; the default is to update mappings only
    # mapping_generator_cls allows customization of the generated mapping

    storage.delete('bucket')

    storage.describe('bucket')  # returns descriptor, not implemented yet

    storage.iter('bucket', doc_type=optional)  # yields rows
    storage.read('bucket', doc_type=optional)  # returns rows

    storage.write('bucket', doc_type, rows, primary_key,
                  as_generator=False)
    # primary_key is a list of field names used to generate document ids
When creating indexes, we always create an index with a semi-random name
and a matching alias that points to it. This allows us to decide whether
to re-index documents whenever we're re-creating an index, or to discard
the existing records.
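For illustration, the way ``primary_key`` turns into document ids can be
sketched like this. This is a minimal, hypothetical sketch; the driver's
actual id scheme is not documented here and may differ:

.. code:: python

    import hashlib

    def make_doc_id(row, primary_key):
        """Derive a deterministic document id from a row's primary-key fields.

        Illustrative only: the real driver may use a different scheme.
        """
        key = '/'.join(str(row[field]) for field in primary_key)
        return hashlib.sha1(key.encode('utf-8')).hexdigest()

    # Rows with equal primary-key values get equal ids, so re-writing an
    # overlapping row updates the existing document instead of duplicating it.
    print(make_doc_id({'num': 1}, ['num']) == make_doc_id({'num': 1, 'x': 2}, ['num']))  # True

Deterministic ids are what make writes idempotent: rows that share the
same primary-key values end up updating a single document.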
Mappings
~~~~~~~~

When creating indexes, the tableschema types are converted to ES types
and a mapping is generated for the index.

Some special properties in the schema provide extra information for
generating the mapping:

- ``array`` types must also have the ``es:itemType`` property, which
  specifies the inner data type of array items.
- ``object`` types must also have the ``es:schema`` property, which
  provides a tableschema for the inner document contained in that
  object (or have ``es:enabled=false`` to disable indexing of that
  field).

Example:
.. code:: json

    {
        "fields": [
            {
                "name": "my-number",
                "type": "number"
            },
            {
                "name": "my-array-of-dates",
                "type": "array",
                "es:itemType": "date"
            },
            {
                "name": "my-person-object",
                "type": "object",
                "es:schema": {
                    "fields": [
                        {"name": "name", "type": "string"},
                        {"name": "surname", "type": "string"},
                        {"name": "age", "type": "integer"},
                        {"name": "date-of-birth", "type": "date", "format": "%Y-%m-%d"}
                    ]
                }
            },
            {
                "name": "my-library",
                "type": "array",
                "es:itemType": "object",
                "es:schema": {
                    "fields": [
                        {"name": "title", "type": "string"},
                        {"name": "isbn", "type": "string"},
                        {"name": "num-of-pages", "type": "integer"}
                    ]
                }
            },
            {
                "name": "my-user-provided-object",
                "type": "object",
                "es:enabled": false
            }
        ]
    }
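To make the conversion concrete, here is a simplified, hypothetical
sketch of how a schema like the one above could be turned into an ES
mapping. The concrete ES types chosen below are an assumption, not the
driver's actual output; only the handling of ``es:itemType``,
``es:schema`` and ``es:enabled`` follows the rules described above:

.. code:: python

    # Hypothetical tableschema -> Elasticsearch type conversion sketch.
    TYPE_MAP = {
        'string': {'type': 'text'},
        'number': {'type': 'double'},
        'integer': {'type': 'long'},
        'boolean': {'type': 'boolean'},
        'date': {'type': 'date'},
    }

    def field_to_property(field):
        if field.get('es:enabled') is False:
            return {'enabled': False}      # keep the field but don't index it
        ftype = field['type']
        if ftype == 'array':
            ftype = field['es:itemType']   # arrays map via their item type
        if ftype == 'object':
            inner = field['es:schema']['fields']
            return {'properties': {f['name']: field_to_property(f)
                                   for f in inner}}
        return dict(TYPE_MAP[ftype])

    def schema_to_mapping(schema):
        return {'properties': {f['name']: field_to_property(f)
                               for f in schema['fields']}}

    mapping = schema_to_mapping({
        'fields': [
            {'name': 'my-number', 'type': 'number'},
            {'name': 'my-array-of-dates', 'type': 'array', 'es:itemType': 'date'},
            {'name': 'my-user-provided-object', 'type': 'object',
             'es:enabled': False},
        ]
    })
    print(mapping['properties']['my-array-of-dates'])  # {'type': 'date'}

Note how an ``array`` field contributes the mapping of its item type:
Elasticsearch has no dedicated array type, since any field may hold
multiple values.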
Custom mappings
^^^^^^^^^^^^^^^

By providing a custom mapping generator class (via
``mapping_generator_cls``) that inherits from the ``MappingGenerator``
class, you should be able to fully customize the generated mapping.
Contributing
------------

The project follows the `Open Knowledge International coding
standards <https://github.com/okfn/coding-standards>`__.

The recommended way to get started is to create and activate a project
virtual environment. To install the package and development dependencies
into the active environment:

.. code:: bash

    $ make install
To run tests with linting and coverage:

.. code:: bash

    $ make test
For linting, ``pylama`` (configured in ``pylama.ini``) is used. At this
stage it's already installed into your environment and can be used
separately with more fine-grained control, as described in its
documentation: https://pylama.readthedocs.io/en/latest/.

For example, to sort results by error type:

.. code:: bash

    $ pylama --sort <path>
For testing, ``tox`` (configured in ``tox.ini``) is used. It's already
installed into your environment and can be used separately with more
fine-grained control, as described in its documentation:
https://testrun.org/tox/latest/.

For example, to check a subset of tests against a Python 2 environment
with increased verbosity (all positional arguments and options after
``--`` are passed to ``py.test``):

.. code:: bash

    $ tox -e py27 -- -v tests/<path>
Under the hood ``tox`` uses ``pytest`` (configured in ``pytest.ini``),
``coverage`` and ``mock``. These packages are available only in tox
environments.
Changelog
---------

Only breaking and the most important changes are described here. The
full changelog and documentation for all released versions can be found
in the nicely formatted `commit
history <https://github.com/frictionlessdata/tableschema-elasticsearch-py/commits/master>`__.
v0.x
~~~~

Initial driver implementation.
.. |Travis| image:: https://img.shields.io/travis/frictionlessdata/tableschema-elasticsearch-py/master.svg
:target: https://travis-ci.org/frictionlessdata/tableschema-elasticsearch-py
.. |Coveralls| image:: http://img.shields.io/coveralls/frictionlessdata/tableschema-elasticsearch-py/master.svg
:target: https://coveralls.io/r/frictionlessdata/tableschema-elasticsearch-py?branch=master
.. |PyPi| image:: https://img.shields.io/pypi/v/tableschema-elasticsearch.svg
:target: https://pypi-hypernode.com/pypi/tableschema-elasticsearch
.. |Gitter| image:: https://img.shields.io/gitter/room/frictionlessdata/chat.svg
:target: https://gitter.im/frictionlessdata/chat
.. |Storage| image:: https://i.imgur.com/RQgrxqp.png