
LLAMA - Loss & LAtency MAtrix

Project description

Loss & LAtency MAtrix

LLAMA is a deployable service that artificially generates traffic for measuring network performance between endpoints.

LLAMA uses UDP socket-level operations to support multiple QoS classes. UDP datagrams are fast and efficient, and they hash across ECMP paths in large networks, uncovering faults and erring interfaces. LLAMA is written in pure Python for maintainability.
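As a rough illustration of the socket-level approach described above (a Python 3 sketch, not LLAMA's actual code): each probe is a UDP datagram with a DSCP/TOS marking set via the IP_TOS socket option, and a fresh source port per probe lets flows spread across ECMP paths. The target address, port, marking, and the echo reflector assumed on the far end are all placeholder assumptions.

    import socket
    import time

    # Illustrative placeholders -- target, port, marking and probe count are
    # not taken from LLAMA's configuration.
    TARGET = ("192.0.2.10", 60000)
    TOS_EF = 0xB8          # DSCP EF (46) shifted into the IP TOS byte
    PROBE_COUNT = 5

    rtts = []
    for seq in range(PROBE_COUNT):
        # A new socket per probe gets a fresh ephemeral source port, so the UDP
        # 5-tuple changes and successive probes can hash onto different ECMP paths.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
        sock.settimeout(1.0)

        sent_at = time.time()
        sock.sendto(f"probe {seq} {sent_at}".encode(), TARGET)
        try:
            # Assumes a simple reflector on the target that echoes each datagram.
            sock.recvfrom(2048)
            rtts.append((time.time() - sent_at) * 1000.0)
        except socket.timeout:
            pass  # no echo within the timeout: count the probe as lost
        finally:
            sock.close()

    loss = 100.0 * (PROBE_COUNT - len(rtts)) / PROBE_COUNT
    print(f"loss={loss:.0f}%  rtt_ms={[round(r, 2) for r in rtts]}")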



Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide for help with installing packages.

Source Distribution

llama-0.0.1a2.tar.gz (11.9 kB), uploaded as Source

Built Distribution

llama-0.0.1a2-py2-none-any.whl (15.9 kB), uploaded for Python 2

File details

Details for the file llama-0.0.1a2.tar.gz.

File metadata

  • Download URL: llama-0.0.1a2.tar.gz
  • Upload date:
  • Size: 11.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for llama-0.0.1a2.tar.gz
  • SHA256: f6d5a0b03da3553de31fe8e16e005a60f532920ba7dc6cbe8384ec5524902bab
  • MD5: 11469dbdb93a6d3131558ef268fc553d
  • BLAKE2b-256: cc0791af9ad6bbdf670bdea21fdb91d3724bb4a1b499a493bb6bc1eb85aa2c9e

See PyPI's documentation for more details on using file hashes.
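To check a downloaded copy of the sdist against the SHA256 digest listed above, something along these lines works (a minimal Python sketch; it assumes the file sits in the current directory under its original name):

    import hashlib

    # SHA256 digest published above for the source distribution.
    EXPECTED = "f6d5a0b03da3553de31fe8e16e005a60f532920ba7dc6cbe8384ec5524902bab"

    with open("llama-0.0.1a2.tar.gz", "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()

    print("OK" if actual == EXPECTED else "MISMATCH")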

File details

Details for the file llama-0.0.1a2-py2-none-any.whl.

File metadata

File hashes

Hashes for llama-0.0.1a2-py2-none-any.whl
  • SHA256: 3d4eb3f7033108bbac8f83c9bb09151e94f8feccd8fdd463f4f6f5827adf65cb
  • MD5: 9b855ebec3737ed4058d9c71bcb04b1c
  • BLAKE2b-256: 8eeb72dc0353fa5ad6d2df88a19554b3f56a064d40aae8717542eb359d78f620

See PyPI's documentation for more details on using file hashes.
