blobfile

Read GCS, Azure Blob Storage, and local paths with the same interface. This is a standalone clone of TensorFlow's gfile (`tensorflow.io.gfile`), supporting local paths, Google Cloud Storage paths (`gs://<bucket>`), and Azure Blob Storage paths (`https://<account>.blob.core.windows.net/<container>/` or `az://<account>/<container>`).
The main function is `BlobFile`, a replacement for `GFile`. There are also a few additional functions, `basename`, `dirname`, and `join`, which mostly do the same thing as their `os.path` namesakes, except that they also support GCS paths and Azure Storage paths.
Installation
```
pip install blobfile
```
Usage
```python
# write a file, then read it back
import blobfile as bf

with bf.BlobFile("gs://my-bucket-name/cats", "wb") as f:
    f.write(b"meow!")
print("exists:", bf.exists("gs://my-bucket-name/cats"))
with bf.BlobFile("gs://my-bucket-name/cats", "rb") as f:
    print("contents:", f.read())
```
There are also some examples processing many blobs in parallel.
Here are the functions in `blobfile`:
- `BlobFile` - like `open()` but works with remote paths too; data can be streamed to/from the remote file. It accepts the following arguments:
  - `streaming`:
    - The default for `streaming` is `True` when `mode` is in `"r", "rb"` and `False` when `mode` is in `"w", "wb", "a", "ab"`.
    - `streaming=True`:
      - Reading is done without downloading the entire remote file.
      - Writing is done to the remote file directly, but only in chunks of a few MB in size. `flush()` will not cause an early write.
      - Appending is not implemented.
    - `streaming=False`:
      - Reading is done by downloading the remote file to a local file during the constructor.
      - Writing is done by uploading the file on `close()` or during destruction.
      - Appending is done by downloading the file during construction and uploading on `close()`.
  - `buffer_size`: number of bytes to buffer; this can potentially make reading more efficient.
  - `cache_dir`: a directory in which to cache files for reading; only valid if `streaming=False` and `mode` is in `"r", "rb"`. You are responsible for cleaning up the cache directory.
Some are inspired by existing `os.path` and `shutil` functions:
- `copy` - copy a file from one path to another; this will do a remote copy between two remote paths on the same blob storage service
- `exists` - returns `True` if the file or directory exists
- `glob`/`scanglob` - return files matching a glob-style pattern as a generator. Globs can have surprising performance characteristics when used with blob storage. Character ranges are not supported in patterns.
- `isdir` - returns `True` if the path is a directory
- `listdir`/`scandir` - list contents of a directory as a generator
- `makedirs` - ensure that a directory and all parent directories exist
- `remove` - remove a file
- `rmdir` - remove an empty directory
- `rmtree` - remove a directory tree
- `stat` - get the size and modification time of a file
- `walk` - walk a directory tree with a generator that yields `(dirpath, dirnames, filenames)` tuples
- `basename` - get the final component of a path
- `dirname` - get the path except for the final component
- `join` - join 2 or more paths together, inserting directory separators between each component
There are a few bonus functions:

- `get_url` - returns a URL for a path (usable by an HTTP client without any authentication) along with the expiration for that URL (or `None`)
- `md5` - get the md5 hash for a path; for GCS this is often fast, but for other backends this may be slow. On Azure, if the md5 of a file is calculated and is missing from the file, the file will be updated with the calculated md5.
- `set_mtime` - set the modified timestamp for a file
- `configure` - set global configuration options for blobfile
  - `log_callback=_default_log_fn`: a log callback function `log(msg: string)` to use instead of printing to stdout
  - `connection_pool_max_size=32`: the max size for each per-host connection pool
  - `max_connection_pool_count=10`: the maximum count of per-host connection pools
  - `azure_write_chunk_size=8 * 2 ** 20`: the size of blocks to write to Azure Storage blobs in bytes; can be set to a maximum of 100MB. This determines both the unit of request retries as well as the maximum file size, which is `50,000 * azure_write_chunk_size`.
  - `google_write_chunk_size=8 * 2 ** 20`: the size of blocks to write to Google Cloud Storage blobs in bytes; this only determines the unit of request retries.
  - `retry_log_threshold=0`: set a retry count threshold above which to log failures to the log callback function
  - `connect_timeout=10`: the maximum amount of time (in seconds) to wait for a connection attempt to a server to succeed; set to `None` to wait forever
  - `read_timeout=30`: the maximum amount of time (in seconds) to wait between consecutive read operations for a response from the server; set to `None` to wait forever
  - `output_az_paths=False`: output `az://` paths instead of `https://` paths for Azure
  - `use_azure_storage_account_key_fallback=True`: fall back to storage account keys for Azure containers; having this enabled (the default) requires listing your subscriptions and may run into 429 errors if you hit the low Azure quotas for subscription listing
Authentication
Google Cloud Storage
The following methods will be tried in order:
- Check the environment variable `GOOGLE_APPLICATION_CREDENTIALS` for a path to service account credentials in JSON format.
- Check for "application default credentials". To set up application default credentials, run `gcloud auth application-default login`.
- Check for a GCE metadata server (if running on GCE) and get credentials from that service.
Azure Blobs
The following methods will be tried in order:
- Check the environment variable `AZURE_STORAGE_KEY` for an Azure storage account key (these are per-storage-account shared keys described in https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
- Check the environment variable `AZURE_APPLICATION_CREDENTIALS`, which should point to JSON credentials for a service principal output by the command `az ad sp create-for-rbac --name <name>`
- Check the environment variables `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, and `AZURE_TENANT_ID`, corresponding to a service principal described in the previous step but without the JSON file.
- Check the environment variable `AZURE_STORAGE_CONNECTION_STRING` for an Azure Storage connection string
- Use credentials from the `az` command line tool if they can be found.

If access using credentials fails, anonymous access will be tried. `blobfile` supports public access for containers marked as public, but not individual blobs.
Paths
For Google Cloud Storage and Azure Blobs, directories don't really exist. These storage systems store files in a single flat list. The "/" separators are just part of the filenames, and there is no need to call the equivalent of `os.mkdir` on one of these systems.
To make local behavior consistent with the remote storage systems, missing local directories will be created automatically when opening a file in write mode.
Local
These are just normal paths for the current machine, e.g. `/root/hello.txt`.
Google Cloud Storage
GCS paths have the format `gs://<bucket>/<blob>`; you cannot perform any operations on `gs://` itself.
Azure Blobs
Azure Blobs URLs have the format `https://<account>.blob.core.windows.net/<container>/<blob>` or `az://<account>/<container>`. The highest you can go up the hierarchy is `https://<account>.blob.core.windows.net/<container>/`; `blobfile` cannot perform any operations on `https://<account>.blob.core.windows.net/`. The `https://` URL is the output format, but `az://` URLs are accepted as inputs.
Errors
- `Error` - base class for library-specific exceptions
- `RequestFailure(Error)` - a request has failed permanently; the status code can be found in the property `response_status: int` and an error code, if available, is in `error: Optional[str]`
- `RestartableStreamingWriteFailure(RequestFailure)` - a streaming write has failed permanently, which requires restarting from the beginning of the stream
- `ConcurrentWriteFailure(RequestFailure)` - a write failed because another process was writing to the same file at the same time
- The following generic exceptions are raised from some functions to make the behavior similar to the original versions: `FileNotFoundError`, `FileExistsError`, `IsADirectoryError`, `NotADirectoryError`, `OSError`, `ValueError`, `io.UnsupportedOperation`
Logging
`blobfile` will keep retrying transient errors until they succeed or a permanent error is encountered (which will raise an exception). To make diagnosing stalls easier, `blobfile` will log when retrying requests.

To route those log lines, use `configure(log_callback=<fn>)` to set a callback function which will be called whenever a log line should be printed. The default callback prints to stdout with the prefix `blobfile:`.
Using the logging module
If you use the python `logging` module, you can have `blobfile` log there:

```python
bf.configure(log_callback=logging.getLogger("blobfile").warning)
```
While `blobfile` does not use the python `logging` module by default, it does use other libraries which use that module. So if you configure the python `logging` module, you may need to change the settings to adjust logging behavior:

- `urllib3`: `logging.getLogger("urllib3").setLevel(logging.ERROR)`
- `filelock`: `logging.getLogger("filelock").setLevel(logging.ERROR)`
Also, as a tip, make sure to use a format that tells you the name of the logger:

```python
logging.basicConfig(format="%(asctime)s [%(name)s] %(levelname)s: %(message)s", level=logging.WARNING)
```
This will let you see which package is producing log messages.
Safety
The library should be thread safe and fork safe with the following exceptions:
- A `BlobFile` instance is not thread safe (only one thread should own a `BlobFile` instance at a time)
- Calls to `bf.configure()` are not thread safe and should ideally happen before performing any operations
Concurrent Writers
Google Cloud Storage supports multiple writers for the same blob and the last one to finish should win. However, in the event of a large number of simultaneous writers, the service will return 429 or 503 errors and most writers will stall. In this case, write to different blobs instead.
Azure Blobs doesn't support multiple writers for the same blob. With the way `BlobFile` is currently configured, the last writer to start writing will win; other writers will get a `ConcurrentWriteFailure`. In addition, all writers could fail if the file size is large and there are enough concurrent writers. In this case, you can write to a temporary blob (with a random filename), copy it to the final location, and then delete the original. The copy will be within a container, so it should be fast.
Changes
See CHANGES