
Fast data store for Pandas timeseries data

Project description

PyStore - Fast data store for Pandas timeseries data


PyStore is a simple (yet powerful) datastore for Pandas dataframes, and while it can store any Pandas object, it was designed with storing timeseries data in mind.

It’s built on top of Pandas, Numpy, Dask, and Parquet (via Fastparquet) to provide an easy-to-use datastore for Python developers, capable of querying millions of rows per second per client.

==> Check out this Blog post for the reasoning and philosophy behind PyStore, as well as a detailed tutorial with code examples.

==> Follow this PyStore tutorial in Jupyter notebook format.

Quickstart

Install PyStore

Install using pip:

$ pip install PyStore

Or upgrade using:

$ pip install PyStore --upgrade --no-cache-dir

INSTALLATION NOTE: If you don’t have Snappy installed (a compression/decompression library), you’ll need to install it first.

Using PyStore

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import pystore
import quandl

# Set storage path (optional, default is `~/.pystore`)
pystore.set_path('/usr/share/pystore')

# List stores
pystore.list_stores()

# Connect to datastore (create it if not exist)
store = pystore.store('mydatastore')

# List existing collections
store.list_collections()

# Access a collection (create it if not exist)
collection = store.collection('NASDAQ')

# List items in collection
collection.list_items()

# Load some data from Quandl
aapl = quandl.get("WIKI/AAPL", authtoken="your token here")

# Store the first 100 rows of the data in the collection under "AAPL"
collection.write('AAPL', aapl[:100], metadata={'source': 'Quandl'})

# Reading the item's data
item = collection.item('AAPL')
data = item.data  # <-- Dask dataframe (see dask.pydata.org)
metadata = item.metadata
df = item.to_pandas()

# Append the rest of the rows to the "AAPL" item
collection.append('AAPL', aapl[100:])

# Reading the item's data
item = collection.item('AAPL')
data = item.data
metadata = item.metadata
df = item.to_pandas()


# --- Query functionality ---

# Query available symbols based on metadata
collection.list_items(some_key='some_value', other_key='other_value')
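
# For example, items written above with metadata={'source': 'Quandl'}
# can be found with (key name taken from the write() call earlier):
collection.list_items(source='Quandl')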


# --- Snapshot functionality ---

# Snapshot a collection
# (Point-in-time named reference for all current symbols in a collection)
collection.create_snapshot('snapshot_name')

# List available snapshots
collection.list_snapshots()

# Get a version of a symbol given a snapshot name
collection.item('AAPL', snapshot='snapshot_name')
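
# A snapshot item behaves like any other item (illustrative):
snapshot_df = collection.item('AAPL', snapshot='snapshot_name').to_pandas()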

# Delete a collection snapshot
collection.delete_snapshot('snapshot_name')


# ...


# Delete the item from the current version
collection.delete_item('AAPL')

# Delete the collection
store.delete_collection('NASDAQ')

Concepts

PyStore provides namespaced collections of data. These collections allow bucketing data by source, user or some other metric (for example frequency: End-Of-Day; Minute Bars; etc.). Each collection (or namespace) maps to a directory containing partitioned parquet files for each item (e.g. symbol).

A good practice is to create collections that look something like this (a short sketch follows the list):

  • collection.EOD

  • collection.ONEMINUTE
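
A minimal sketch of that bucketing idea, where the collection names and the daily_df/minute_df dataframes are illustrative placeholders rather than part of the PyStore API:

import pystore

store = pystore.store('mydatastore')

# One collection per data frequency
eod = store.collection('EOD')             # end-of-day bars
intraday = store.collection('ONEMINUTE')  # one-minute bars

# Each item is written as a partitioned Parquet directory inside its collection
eod.write('AAPL', daily_df, metadata={'freq': 'EOD'})
intraday.write('AAPL', minute_df, metadata={'freq': '1min'})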

Requirements

  • Python >= 3.5

  • Pandas

  • Numpy

  • Dask

  • Fastparquet

  • Snappy (Google’s compression/decompression library)

PyStore was tested to work on *NIX-like systems, including macOS.

Dependencies:

PyStore uses Snappy, a fast and efficient compression/decompression library from Google. You can install Snappy on *nix-like systems using your system’s package manager.

See the python-snappy Github repo for more information.

TL;DR

You can install the Snappy C library with the following commands:

  • APT: sudo apt-get install libsnappy-dev

  • RPM: sudo yum install libsnappy-devel

  • Brew: brew install snappy

* Windows users should check out Snappy for Windows and this Stackoverflow post for help on installing Snappy and python-snappy.
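
Once the C library is in place, a quick round-trip with the python-snappy bindings (assuming they are installed, e.g. via pip install python-snappy) confirms everything is wired up; this is just a sanity check, not part of PyStore itself:

import snappy

# Compress and decompress a small payload to verify the bindings work
payload = b'pystore' * 1000
assert snappy.uncompress(snappy.compress(payload)) == payload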

Known Limitation

PyStore currently only supports the local filesystem. I plan on adding support for Amazon S3 (via s3fs), Google Cloud Storage (via gcsfs), and Hadoop Distributed File System (via hdfs3) in the future.

Acknowledgements

PyStore is hugely inspired by Man AHL’s Arctic, which uses MongoDB for storage and allows for versioning and other features. I highly recommend you check it out.

License

PyStore is licensed under the Apache License, Version 2.0, a copy of which is included in LICENSE.txt.


I’m very interested in your experience with PyStore. Please drop me a note with any feedback you have.

Contributions welcome!

- Ran Aroussi

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

PyStore-0.1.3.tar.gz (14.6 kB view details)

Uploaded Source

File details

Details for the file PyStore-0.1.3.tar.gz.

File metadata

  • Download URL: PyStore-0.1.3.tar.gz
  • Upload date:
  • Size: 14.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.25.0 CPython/3.6.6

File hashes

Hashes for PyStore-0.1.3.tar.gz

  • SHA256: 9c7b8ec8bd73b6d7b5b790435780b00247dfc4ac1198351ade200535738da6a9

  • MD5: 1611ac38686c90cd2e0a60ce781460dd

  • BLAKE2b-256: 5eecba6ccb7a0dc65292d10ab6b9e49cd405b5e7bd11afa108616b1cc4b1823c

See more details on using hashes here.
