
Backend.AI Client for Python

Project description


The official API client library for Backend.AI

Usage

You should set the access key and secret key as environment variables to use the API. Grab your keypair from cloud.backend.ai or your cluster admin.

export BACKEND_ACCESS_KEY=...
export BACKEND_SECRET_KEY=...

# optional (for local clusters)
export BACKEND_ENDPOINT="https://my-precious-cluster/"

Command-line Interface

The backend.ai command is the entry point for all subcommands. (Alternatively, you can use the verbosely long form: python -m ai.backend.client.cli)

Highlight: run command

To run code given directly on the command line, use the -c option to pass the code string (just like a shell).

$ backend.ai run python -c "print('hello world')"
∙ Client session token: d3694dda6e5a9f1e5c718e07bba291a9
✔ Kernel (ID: zuF1OzMIhFknyjUl7Apbvg) is ready.
hello world

You can even run C code on the fly. (Note that we put a dollar sign before the single-quoted code argument so that the shell interprets '\n' as actual newlines.)

$ backend.ai run c -c $'#include <stdio.h>\nint main() {printf("hello world\\n");}'
∙ Client session token: abc06ee5e03fce60c51148c6d2dd6126
✔ Kernel (ID: d1YXvee-uAJTx4AKYyeksA) is ready.
hello world

For larger programs, you may upload multiple files and then build and execute them. Below is a simple example that runs a sample C program.

$ git clone https://gist.github.com/achimnol/df464c6a3fe05b21e9b06d5b80e986c5 c-example
Cloning into 'c-example'...
Unpacking objects: 100% (5/5), done.
$ cd c-example
$ backend.ai run c main.c mylib.c mylib.h
∙ Client session token: 1c352a572bc751a81d1f812186093c47
✔ Kernel (ID: kJ6CgWR7Tz3_v2WsDHOwLQ) is ready.
✔ Uploading done.
✔ Build finished.
myvalue is 42
your name? LABLUP
hello, LABLUP!

Please refer to the --help manual provided by the run command.

You may use the shortcut commands lcc and lpython instead of typing the full Python module path, like:

$ lcc main.c mylib.c mylib.h

Since client version 1.1.5, sessions are no longer cleaned up automatically. To clean them up, add the --rm option to the run command, like the Docker CLI.
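
For example, the session created by the following one-shot run is cleaned up as soon as the code finishes (this reuses the hello-world snippet above; the usual progress output is omitted):

$ backend.ai run --rm python -c "print('hello world')"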

Highlight: ps and terminate commands

You can see the list of currently running sessions using your API keypair.

$ backend.ai ps
Session ID                        Lang/runtime    Created At                        Terminated At    Status      Memory Slot    CPU Slot    GPU Slot
--------------------------------  --------------  --------------------------------  ---------------  --------  -------------  ----------  ----------
5baafb2136029228ca9d873e1f2b4f6a  python:latest   2018-01-09T04:32:21.962223+00:00                   RUNNING            1024           1           0

If you set the -t option in the run command, it will be used as the session ID, so you can assign a human-readable, easy-to-type alias to your sessions. A session ID can be reused after the current session with the same ID terminates.
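
For example, the following sketch assigns the alias mysession (a hypothetical name chosen for illustration) to a new session:

$ backend.ai run -t mysession python -c "print('hello world')"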

To terminate a session,

$ backend.ai terminate 5baafb2136029228ca9d873e1f2b4f6a
✔ Done.

Highlight: proxy command

To use API development tools such as GraphiQL for the admin API, run an insecure local API proxy. This will attach all the necessary authorization headers to your vanilla HTTP API requests.

$ backend.ai proxy
∙ Starting an insecure API proxy at http://localhost:8084

More commands?

Please run backend.ai --help to see more commands.

Synchronous API

from ai.backend.client import Kernel

kern = Kernel.get_or_create('lua5', client_token='abc')
result = kern.execute('print("hello world")', mode='query')
print(result['console'])
kern.destroy()

You need to take care with client_token because it determines whether kernel sessions are reused. Sorna cloud terminates long-idle kernel sessions after a timeout, but within that timeout, any kernel creation request with the same client_token lets Sorna cloud reuse the existing kernel.
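
As a minimal sketch of this behavior, assuming the idle timeout has not elapsed between the two calls:

from ai.backend.client import Kernel

# First call: creates a new kernel session bound to the client token 'abc'.
kern = Kernel.get_or_create('lua5', client_token='abc')
result = kern.execute('print("first run")', mode='query')
print(result['console'])

# Second call with the same client_token (within the idle timeout):
# the existing kernel session is reused instead of creating a new one.
kern = Kernel.get_or_create('lua5', client_token='abc')
result = kern.execute('print("second run, same session")', mode='query')
print(result['console'])

kern.destroy()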

Asynchronous API

import asyncio
from ai.backend.client.asyncio import AsyncKernel

async def main():
    kern = await AsyncKernel.get_or_create('lua5', client_token='abc')
    result = await kern.execute('print("hello world")', mode='query')
    print(result['console'])
    await kern.destroy()

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()

All the methods of AsyncKernel objects are exactly the same as those of the synchronous version, except that they are coroutines.

Additionally, AsyncKernel offers the async-only method stream_pty(). It returns a StreamPty object which allows you to access a pseudo-tty of the kernel. StreamPty works like an async generator and provides methods to send stdin input as well as to resize the terminal.
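
The following is a minimal sketch of reading PTY output, based only on the description above; whether stream_pty() must be awaited, and how the stdin/resize methods are spelled, are assumptions, so treat the details as placeholders rather than the exact API.

import asyncio
from ai.backend.client.asyncio import AsyncKernel

async def main():
    kern = await AsyncKernel.get_or_create('lua5', client_token='abc')
    # Assumption: stream_pty() is awaited here and returns a StreamPty object.
    pty = await kern.stream_pty()
    # StreamPty works like an async generator, so iterate over it to receive
    # terminal output chunks. (Its stdin/resize methods are omitted here
    # because their exact names are not documented in this README.)
    async for chunk in pty:
        print(chunk, end='')
    await kern.destroy()

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()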

Troubleshooting (FAQ)

  • There are error reports related to simplejson with Anaconda on Windows. This package no longer depends on simplejson since v1.0.5, so you may safely uninstall it; Python 3.5+ offers an almost identical json module in the standard library.

    If you really need to keep the simplejson package, uninstall the existing one manually and reinstall it by downloading a pre-built binary wheel from here.

Project details


Release history

This version

1.3.3

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

backend.ai-client-1.3.3.tar.gz (24.4 kB)

Uploaded Source

Built Distribution

backend.ai_client-1.3.3-py3-none-any.whl (34.8 kB)

Uploaded Python 3

File details

Details for the file backend.ai-client-1.3.3.tar.gz.

File metadata

File hashes

Hashes for backend.ai-client-1.3.3.tar.gz
Algorithm    Hash digest
-----------  ----------------------------------------------------------------
SHA256       c986f6be944a8036e9f79be231178c032a98dc30372d6afe918fc3a35860159a
MD5          7b7339b2b1fe058fbf4f9ae96159475a
BLAKE2b-256  668347fecfd913ee4bc1ba1a59d10a07f21dddc5647873a44c27e53c264e91bb


File details

Details for the file backend.ai_client-1.3.3-py3-none-any.whl.

File metadata

File hashes

Hashes for backend.ai_client-1.3.3-py3-none-any.whl
Algorithm    Hash digest
-----------  ----------------------------------------------------------------
SHA256       c28f9561a8d652d83026c377fdb08bab4afccb1d29bcb57bb62b9217d360eb8c
MD5          5f800734d8417555d567b853257569ab
BLAKE2b-256  45fbe7ab03585cbbe397241041c5427ce88fceaf258a6960752836ca05f27654

