Read GCS and local paths with the same interface, clone of tensorflow.io.gfile
blobfile
This is a standalone clone of TensorFlow's `gfile`, supporting both local paths and `gs://` (Google Cloud Storage) paths.
The main function is `BlobFile`, a replacement for `GFile`. There are also a few additional functions, `basename`, `dirname`, and `join`, which mostly do the same thing as their `os.path` namesakes, except that they also support `gs://` paths.
Installation:

```sh
pip install blobfile
```
Usage:

```python
import blobfile as bf

with bf.BlobFile("gs://my-bucket-name/cats", "wb") as w:
    w.write(b"meow!")
```
Here are the functions:

* `BlobFile` - like `open()` but works with `gs://` paths too; data can be streamed to/from the remote file. It accepts the following arguments:
  * `streaming`:
    * The default for `streaming` is `True` when `mode` is in `"r", "rb"` and `False` when `mode` is in `"w", "wb", "a", "ab"`.
    * `streaming=True`:
      * Reading is done without downloading the entire remote file.
      * Writing is done to the remote file directly, but only in chunks of a few MB in size. `flush()` will not cause an early write.
      * Appending is not implemented.
    * `streaming=False`:
      * Reading is done by downloading the remote file to a local file during the constructor.
      * Writing is done by uploading the file on `close()` or during destruction.
      * Appending is done by downloading the file during construction and uploading on `close()`.
  * `buffer_size`: the number of bytes to buffer; this can potentially make reading more efficient.
  * `cache_dir`: a directory in which to cache files for reading, only valid if `streaming=False` and `mode` is in `"r", "rb"`. You are responsible for cleaning up the cache directory.
Some are inspired by existing `os.path` and `shutil` functions:

* `copy` - copy a file from one path to another; will do a remote copy between two remote paths on the same blob storage service
* `exists` - returns `True` if the file or directory exists
* `glob` - return files matching a glob-style pattern as a generator. Globs can have surprising performance characteristics when used with blob storage. Character ranges are not supported in patterns.
* `isdir` - returns `True` if the path is a directory
* `listdir` - list contents of a directory as a generator
* `makedirs` - ensure that a directory and all parent directories exist
* `remove` - remove a file
* `rmdir` - remove an empty directory
* `rmtree` - remove a directory tree
* `stat` - get the size and modification time of a file
* `walk` - walk a directory tree with a generator that yields `(dirpath, dirnames, filenames)` tuples
* `basename` - get the final component of a path
* `dirname` - get the path except for the final component
* `join` - join 2 or more paths together, inserting directory separators between each component
There are a few bonus functions:

* `get_url` - returns a url for a path along with the expiration for that url (or `None`)
* `md5` - get the md5 hash for a path; for GCS this is fast, but for other backends this may be slow
* `set_log_callback` - set a log callback function `log(msg: string)` to use instead of printing to stdout
Errors

* `Error` - base class for library-specific exceptions
* `RequestFailure` - a request has failed permanently; has `message:str`, `request:Request`, and `response:urllib3.HTTPResponse` attributes
* The following generic exceptions are raised from some functions to make the behavior similar to the original versions: `FileNotFoundError`, `FileExistsError`, `IsADirectoryError`, `NotADirectoryError`, `OSError`, `ValueError`, `io.UnsupportedOperation`
Logging
In order to make diagnosing stalls easier, `blobfile` will log when retrying requests. `blobfile` will keep retrying transient errors until they succeed or a permanent error is encountered (which will raise an exception).

To route those log lines, use `set_log_callback` to set a callback function which will be called whenever a log line should be printed. The default callback prints to stdout.

While `blobfile` does not use the python `logging` module, it does use `urllib3`, which uses that module. So if you configure the python `logging` module, you may need to change the settings to adjust `urllib3`'s logging. To log only errors from `urllib3`, you can do `logging.getLogger("urllib3").setLevel(logging.ERROR)`.
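Since `urllib3` logs through the standard `logging` module, quieting it needs nothing beyond the standard library:

```python
import logging

# blobfile uses urllib3 internally; urllib3's verbosity is adjusted
# through its own named logger in the standard logging module
logging.getLogger("urllib3").setLevel(logging.ERROR)

assert logging.getLogger("urllib3").getEffectiveLevel() == logging.ERROR
```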
Examples
Write and read a file:
```python
import blobfile as bf

with bf.BlobFile("gs://my-bucket/file.name", "wb") as f:
    f.write(b"meow")

print("exists:", bf.exists("gs://my-bucket/file.name"))

with bf.BlobFile("gs://my-bucket/file.name", "rb") as f:
    print("contents:", f.read())
```
Parallel execution:
```python
import blobfile as bf
import multiprocessing as mp
import tqdm

filenames = [f"{i}.ext" for i in range(1000)]

with mp.Pool() as pool:
    for filename, exists in tqdm.tqdm(
        zip(filenames, pool.imap(bf.exists, filenames)), total=len(filenames)
    ):
        pass
```
Parallel download of a single file:
```python
import blobfile as bf
import concurrent.futures


def _download_chunk(path, start, size):
    with bf.BlobFile(path, "rb") as f:
        f.seek(start)
        return f.read(size)


def parallel_download(path, chunk_size=16 * 2**20):
    pieces = []
    stat = bf.stat(path)
    with concurrent.futures.ProcessPoolExecutor() as executor:
        start = 0
        futures = []
        while start < stat.size:
            future = executor.submit(_download_chunk, path, start, chunk_size)
            futures.append(future)
            start += chunk_size
        for future in futures:
            pieces.append(future.result())
    return b"".join(pieces)


def main():
    contents = parallel_download("<path to file>")


if __name__ == "__main__":
    main()
```
Parallel copytree:
```python
import blobfile as bf
import concurrent.futures
import tqdm


def _perform_op(op_tuple):
    op, src, dst = op_tuple
    if op == "copy":
        bf.copy(src, dst, overwrite=True)
    elif op == "mkdir":
        bf.makedirs(dst)
    else:
        raise Exception(f"invalid op {op}")


def copytree(src, dst):
    """
    Copy a directory tree from one location to another
    """
    if not bf.isdir(src):
        raise NotADirectoryError(f"The directory name is invalid: '{src}'")
    assert not dst.startswith(src), "dst cannot be a subdir of src"
    if not src.endswith("/"):
        src += "/"
    bf.makedirs(dst)

    with tqdm.tqdm(desc="listing") as pbar:
        ops = []
        # walk with topdown=False should be faster for nested directory trees
        for src_root, dirnames, filenames in bf.walk(src, topdown=False):
            relpath = src_root[len(src):]
            dst_root = bf.join(dst, relpath)

            if len(filenames) == 0:
                # only make empty directories; other directories will be
                # implicitly created by copy
                ops.append(("mkdir", src_root, dst_root))
                pbar.update(1)

            # on GCS we can have a directory name that has the same name as a file
            # if that's the case, skip it since that's too confusing
            skip_filenames = set(dirnames)
            for filename in filenames:
                if filename in skip_filenames:
                    continue
                src_path = bf.join(src_root, filename)
                dst_path = bf.join(dst_root, filename)
                ops.append(("copy", src_path, dst_path))
                pbar.update(1)

    with concurrent.futures.ProcessPoolExecutor() as executor:
        list(tqdm.tqdm(executor.map(_perform_op, ops), total=len(ops), desc="copying"))


def main():
    copytree("<path to source>", "<path to destination>")


if __name__ == "__main__":
    main()
```
Changes
See CHANGES.md