
LLAMA - Loss & LAtency MAtrix

Project description


LLAMA is a deployable service that generates synthetic traffic for measuring network performance between endpoints.

LLAMA uses UDP socket-level operations to support multiple QoS classes. UDP datagrams are fast and efficient, and they hash across ECMP paths in large networks, which helps uncover faults and misbehaving interfaces. LLAMA is written in pure Python for maintainability.
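As a rough illustration of this socket-level approach (a minimal sketch, not LLAMA's actual API: the target address, port, DSCP value, and the presence of a UDP reflector that echoes payloads back are all assumptions here), a single probe marked with a QoS class might look like this:

```python
import socket
import struct
import time

def send_probe(target, port=60000, dscp=46, timeout=1.0):
    """Send one timestamped UDP probe and return RTT in ms, or None on loss.

    Assumes a reflector at (target, port) that echoes datagrams back unchanged.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the IP TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    # Binding to a fresh ephemeral source port per probe changes the 5-tuple,
    # so successive probes can hash onto different ECMP paths.
    sock.bind(("", 0))
    sock.settimeout(timeout)
    payload = struct.pack("!d", time.time())
    try:
        sock.sendto(payload, (target, port))
        data, _ = sock.recvfrom(512)
        sent = struct.unpack("!d", data[:8])[0]
        return (time.time() - sent) * 1000.0
    except socket.timeout:
        return None  # counted as loss
    finally:
        sock.close()

if __name__ == "__main__":
    rtt = send_probe("192.0.2.10")  # placeholder reflector address
    print("lost" if rtt is None else f"{rtt:.2f} ms")
```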

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama-0.0.1a1.tar.gz (84.8 kB)

Uploaded Source

Built Distribution

llama-0.0.1a1-py2-none-any.whl (16.4 kB)

Uploaded Python 2

File details

Details for the file llama-0.0.1a1.tar.gz.

File metadata

  • Download URL: llama-0.0.1a1.tar.gz
  • Upload date:
  • Size: 84.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for llama-0.0.1a1.tar.gz

  • SHA256: 432d5bc8d0fd432f714bc582677f6fd11e9591e92e16605ac2b5af54855b6637
  • MD5: dbeaa87b48eb25f2d428a85d9a1264ec
  • BLAKE2b-256: c2c978d11e2ee12d2074343d5d0c7862d2bde8872c1d45908c1077a72e3d6899

These hashes can be used to verify the integrity of the downloaded file; a sketch of such a check follows below.
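For example, the source distribution can be checked against the published SHA256 digest with the standard library (a minimal sketch; the filename and expected digest are the ones listed on this page, and the helper function name is illustrative):

```python
import hashlib

# Expected SHA256 digest published above for the source distribution.
EXPECTED_SHA256 = "432d5bc8d0fd432f714bc582677f6fd11e9591e92e16605ac2b5af54855b6637"

def sha256_of(path, chunk_size=1 << 16):
    """Compute the SHA256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("llama-0.0.1a1.tar.gz")
    print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```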

File details

Details for the file llama-0.0.1a1-py2-none-any.whl.

File metadata

File hashes

Hashes for llama-0.0.1a1-py2-none-any.whl

  • SHA256: 496ba2d95a8420cecd69575933cf4b8d3ae2e16b7519310e764f2a100e234ba6
  • MD5: d91e6dfbd8e22c7e72006cbda5b8a7b3
  • BLAKE2b-256: b9b989ce3982272ddc832a0513413a33a848681ce1f6012725a513dc21853a76

These hashes can be used in the same way to verify the integrity of the wheel.
