Supplementary data about languages, used by the langcodes module

Project description

language_data: a supplement to langcodes

This package is not meant to be used on its own. Please see langcodes for documentation.

language_data is a supplement to the langcodes module, for working with standardized codes for human languages. It stores the bulkier, harder-to-index data about languages, particularly what they are named in various languages.

For example, this package stores the data that tells you that the code "en" means "English" in English, or that "francés" is the Spanish (es) name for French (fr).
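
A minimal sketch of how that data surfaces through langcodes (assuming a recent langcodes release with display_name() and autonym(); as noted above, the data is meant to be read through langcodes rather than from language_data directly):

    import langcodes

    # "en" names itself "English" when displayed in English.
    print(langcodes.Language.get("en").display_name())      # 'English'

    # "francés" is the Spanish (es) name for French (fr).
    print(langcodes.Language.get("fr").display_name("es"))  # 'francés'

    # autonym() gives a language's name in the language itself.
    print(langcodes.Language.get("fr").autonym())            # 'français'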

The functions and test cases for working with this data are in langcodes, because working with the data correctly requires parsing language codes.

Data

The data included in this package is:

  • The names of various languages, in various languages
  • The estimated population that speaks each language
  • The estimated population that writes each language

These are all extracted from the Unicode CLDR data package, version 40, plus a few additional language names that fill in gaps in CLDR.
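
The population estimates are reached the same way, through langcodes; a minimal sketch (the exact figures depend on which CLDR release your installed versions bundle):

    import langcodes

    spanish = langcodes.Language.get("es")

    # Estimated number of people who speak Spanish, derived from CLDR territory data.
    print(spanish.speaking_population())

    # Estimated number of people who write Spanish; see the caveats below.
    print(spanish.writing_population())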

Caveats

  • The estimates for "writing population" are often overestimates, as described in the CLDR documentation on territory data. In most cases, they are derived from published data about literacy rates in the places where those languages are spoken. This doesn't take into account that many literate people around the world speak a language that isn't typically written, and write in a different language.

  • The writing systems of Chinese erase most (but not all) of the distinctions between spoken Chinese languages. You'll see separate estimates of the writing population for Cantonese, Mandarin, Wu, and so on, even though you'll likely consider these all to be zh when written.

  • CLDR doesn't have language population data for sign languages. Sign languages end up with a speaking_population() and writing_population() of 0, and I suppose that is literally true, but there's no data from which we could provide a signing_population() method.
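
For instance, a quick sketch of the sign-language case ("ase" is the ISO 639-3 code for American Sign Language):

    import langcodes

    asl = langcodes.Language.get("ase")  # American Sign Language

    # CLDR has no population data for sign languages, so both estimates are 0.
    print(asl.speaking_population())
    print(asl.writing_population())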

Dependencies

language_data has a dependency on the marisa-trie package so that it can load a compact, efficient data structure for looking up language names.
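
For context, a generic sketch of what the marisa-trie library provides (this is only the library's basic Trie API, not language_data's own storage layout):

    import marisa_trie

    # A static, memory-efficient trie over a fixed set of string keys.
    trie = marisa_trie.Trie(["en", "eng", "english", "es", "español"])

    print("en" in trie)    # True
    print(trie.keys("e"))  # all stored keys that start with "e"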

Installation

language_data is usually installed as a dependency of langcodes, and doesn't make much sense without it. You can pip install language_data anyway if you want.

To install the language_data package in editable mode, run poetry install in the package root. (This is the equivalent of pip install -e ., which will hopefully become compatible again soon via PEP 660.)

Update CLDR data

  • Make sure submodules are up to date: git submodule update --init
  • Download CLDR data from https://cldr.unicode.org/index/downloads/
  • Unzip and copy supplemental/languageInfo.xml and supplemental/supplementalData.xml into language_data/data
  • cd language_data && ../.venv/bin/python build_data.py

Download files

Source Distribution

language_data-1.3.0.tar.gz (5.1 MB, Source)

Built Distribution

language_data-1.3.0-py3-none-any.whl (5.4 MB, Python 3)

File details

Details for the file language_data-1.3.0.tar.gz.

File metadata

  • Download URL: language_data-1.3.0.tar.gz
  • Size: 5.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for language_data-1.3.0.tar.gz:

Algorithm     Hash digest
SHA256        7600ef8aa39555145d06c89f0c324bf7dab834ea0b0a439d8243762e3ebad7ec
MD5           92428e3e3579646bfc9326467c45e8ac
BLAKE2b-256   ddce3f144716a9f2cbf42aa86ebc8b085a184be25c80aa453eea17c294d239c1


Provenance

The following attestation bundles were made for language_data-1.3.0.tar.gz:

Publisher: cd.yml on georgkrause/language_data


File details

Details for the file language_data-1.3.0-py3-none-any.whl.


File hashes

Hashes for language_data-1.3.0-py3-none-any.whl:

Algorithm     Hash digest
SHA256        e2ee943551b5ae5f89cd0e801d1fc3835bb0ef5b7e9c3a4e8e17b2b214548fbf
MD5           a61bdae695eecd6b9c35200b90510e67
BLAKE2b-256   5de95a5ffd9b286db82be70d677d0a91e4d58f7912bb8dd026ddeeb4abe70679


Provenance

The following attestation bundles were made for language_data-1.3.0-py3-none-any.whl:

Publisher: cd.yml on georgkrause/language_data

