
This is an active fork of baiji-pod, Body Labs’ asset cache for S3 using baiji.

The fork’s goals are modest:

  • Keep the library working in current versions of Python and other tools.

  • Make bug fixes.

  • Provide API stability and backward compatibility with the upstream version.

  • Respond to community contributions.

It’s used by related forks such as lace.

Installation

Install the fork:

pip install metabaiji-pod

And import it just like the upstream library:

from baiji.pod import AssetCache
from baiji.pod import Config
from baiji.pod import VersionedCache

Overview

Version-tracked assets and a low-level asset cache for Amazon S3, using baiji.

Features

  • Versioned cache for version-tracked assets

    • Each change to an asset creates a new file

    • A checked-in manifest pins each revision of the code to a given version of the file

    • Convenient CLI for pushing updates

  • Low-level asset cache, for any S3 path

    • Assets are stored locally, and revalidated after a timeout

  • Prefill tool populates the caches with a list of needed assets

  • Supports Python 2.7

  • Supports OS X, Linux, and Windows

    • A few dev features only work on OS X

  • Tested and production-hardened

The versioned cache

The versioned cache provides access to a repository of files. Changes to those files are tracked and identified with a semver-like version number.

To use the versioned cache, you need a copy of a manifest file, which lists all the versioned paths and the latest version of each one. When you request a file from the cache, it consults this manifest file to determine the correct version. The versioned cache delegates loading to the underlying asset cache.

The versioned cache was designed for compute assets: chunks of data which are used in code. When the manifest is checked in with the code, it pins the version of each asset. If the asset is subsequently updated, that revision of the code will continue to get the version it’s expecting.

The bucket containing the versioned assets is intended to be immutable: nothing there should ever be changed or deleted, only new versions added.

The manifest looks like this:

{
    "/foo/bar.csv": "1.2.5",
    "/foo/bar.json": "0.1.6"
}

To load a versioned asset:

import json
from baiji.pod import AssetCache
from baiji.pod import Config
from baiji.pod import VersionedCache

config = Config()
# Improve performance by assuming the bucket is immutable.
config.IMMUTABLE_BUCKETS = ['my-versioned-assets']

vc = VersionedCache(
    cache=AssetCache(config),
    manifest_path='versioned_assets.json',
    bucket='my-versioned-assets')

with open(vc('/foo/bar.json'), 'r') as f:
    data = json.load(f)

Or, with baiji-serialization (https://github.com/bodylabs/baiji-serialization):

from baiji.serialization import json
data = json.load(vc('/foo/bar.json'))

To add a new versioned path, or update an existing one, use the vc command-line tool:

vc add /foo/bar.csv ~/Desktop/bar.csv
vc update --major /foo/bar.csv ~/Desktop/new_bar.csv
vc update --minor /foo/bar.csv ~/Desktop/new_bar.csv
vc update --patch /foo/bar.csv ~/Desktop/new_bar.csv

A VersionedCache object is specific to a manifest file and a bucket.

Though the version number uses semver-like semantics, the cache ignores version semantics. The manifest pins an exact version number.
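
For example, assuming conventional semver bump behavior, running vc update --minor /foo/bar.csv ~/Desktop/new_bar.csv against the manifest shown earlier would rewrite the pin for that path roughly like this (a sketch; the other entry is untouched):

{
    "/foo/bar.csv": "1.3.0",
    "/foo/bar.json": "0.1.6"
}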

The asset cache

The asset cache works at a lower level of abstraction. It holds local copies of arbitrary S3 assets. Calling the cache() function with an S3 path ensures that the file is available locally, and then returns a valid, local path.

On a cache miss, the file is downloaded into the cache and its local path is returned. Subsequent calls return the same local path. After a timeout, which defaults to one day, the validity of the local file is checked by comparing a local MD5 hash with the remote ETag, and the check repeats once per day thereafter.

To gain a performance boost, you can configure immutable buckets, whose contents are never revalidated after download. The versioned cache uses this feature.
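
To configure immutable buckets for the asset cache explicitly, here is a minimal sketch reusing the Config attribute shown in the versioned cache example above (the bucket name is illustrative):

from baiji.pod import AssetCache
from baiji.pod import Config

config = Config()
# Contents of these buckets are assumed never to change, so cached copies are
# never revalidated after download.
config.IMMUTABLE_BUCKETS = ['my-immutable-bucket']
cache = AssetCache(config)

Basic usage with the default configuration: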

import json
from baiji.pod import AssetCache

cache = AssetCache.create_default()

with open(cache('s3://example-bucket/example.json'), 'r') as f:
    data = json.load(f)

Or, with baiji-serialization (https://github.com/bodylabs/baiji-serialization):

from baiji.serialization import json
data = json.load(cache('s3://example-bucket/example.json'))

It is safe to call cache multiple times: cache(cache('path')) will behave correctly.
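
For example, a minimal sketch reusing the cache object from above:

# The inner call returns a local path; passing that local path back into the
# cache is safe and yields a usable local path for the same asset.
local_path = cache(cache('s3://example-bucket/example.json'))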

Tips

When you’re developing, you often want to try out variations on a file before committing to a particular one. Rather than incrementing the patch level over and over, you can point the manifest entry at an absolute path:

"/foo/bar.csv": "/Users/me/Desktop/foo.obj",

This can be either a local or an S3 path: use a local path when you’re iterating by yourself, and an S3 path to iterate with other developers or in CI.
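
For instance, a hypothetical manifest mixing a pinned version with local and S3 overrides might look like this (all paths are illustrative):

{
    "/foo/bar.json": "0.1.6",
    "/foo/bar.csv": "/Users/me/Desktop/bar.csv",
    "/foo/baz.csv": "s3://my-dev-bucket/scratch/baz.csv"
}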

Development

pip install -r requirements_dev.txt
rake unittest
rake lint

TODO

  • Add vc config to config

    • Explain or clean up the weird default_bucket config logic in prefill_runner (it exists so that a customized script in core doesn’t require these arguments).

  • Use config without subclassing. Pass overrides to init.

  • Configure using an importable config path instead of injecting. Or, possibly, allow ~/.aws/baiji_config to change defaults.

  • Rework baiji.pod.util.reachability and perhaps baiji.util.reachability as well.

  • Restore CDN publish functionality in core

  • Avoid using actual versioned assets. Perhaps write some (smaller!) files to a test bucket and use those?

  • Remove suffixes support in vc.uri, used only for CDNPublisher

  • Move yaml.dump and json.* to baiji. Possibly do a try: from baiji.serialization.json import load, dump; except ImportError: def load(... Or at least have a comment to the effect of “don’t use this, use baiji.serialization.json”

  • Use consistent argparse pattern in the runners.

  • I think it would be better if the CacheFile didn’t need to know about the AssetCache, to avoid this bi-directional dependency. It’s only required in the constructor, but that could live on the AssetCache, e.g. create_cache_file(path, bucket=None).

Contribute

Pull requests welcome!

Support

If you are having issues, please let us know.

Acknowledgements

baiji-pod was developed at Body Labs, primarily by Alex Weiss and Paul Melnikow.

License

The project is licensed under the Apache license, version 2.0.
