pyreaddbc package

Project description

pyreaddbc

pyreaddbc is a Python library for working with DBase database files. Legacy systems from the Brazilian Ministry of Health still use the DBF and DBC formats to publish data. This package was developed to help PySUS extract data from these formats into more modern ones. pyreaddbc can also be used to convert DBC files from any other source.

Installation

You can install pyreaddbc using pip:

pip install pyreaddbc

Usage

Reading DBC Files

Extracting the DBF from a DBC may require specifying the encoding of the original data, if known.

import pyreaddbc

df = pyreaddbc.read_dbc("LTPI2201.dbc", encoding="iso-8859-1")
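
If you only need the decompressed DBF file rather than a DataFrame, recent versions of pyreaddbc also expose a dbc2dbf helper. The snippet below is a minimal sketch that assumes this function is available in your installed version:

import pyreaddbc

# Decompress the DBC into a plain DBF file on disk
# (dbc2dbf is assumed to be exported by the installed pyreaddbc version)
pyreaddbc.dbc2dbf("LTPI2201.dbc", "LTPI2201.dbf")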

Exporting to CSV.GZ

To export a DataFrame to a compressed CSV file (CSV.GZ), you can use pandas:

import pyreaddbc

df = pyreaddbc.read_dbc("./LTPI2201.dbc", encoding="iso-8859-1")
df.to_csv("LTPI2201.csv.gz", compression="gzip", index=False)
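
As a quick sanity check, the compressed file can be read straight back with pandas, which infers the gzip compression from the .gz suffix:

import pandas as pd

# Re-read the compressed CSV to confirm the export round-trips
df_check = pd.read_csv("LTPI2201.csv.gz")
print(df_check.shape)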

Exporting to Parquet

To export a DataFrame to a Parquet file, you can use the pyarrow library:

import pyreaddbc
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
from pathlib import Path

# Read DBC file and convert to DataFrame
df = pyreaddbc.read_dbc("./LTPI2201.dbc", encoding="iso-8859-1")

# Export to CSV.GZ
df.to_csv("LTPI2201.csv.gz", compression="gzip", index=False)

# Export to Parquet (the output directory must exist)
Path("parquets").mkdir(exist_ok=True)
pq.write_table(pa.Table.from_pandas(df), "parquets/LTPI2201.parquet")

# Read the Parquet files and decode DataFrame columns
parquet_dir = Path("parquets")
parquets = parquet_dir.glob("*.parquet")

# Reading with engine='fastparquet' requires the fastparquet package
chunks_list = [
    pd.read_parquet(str(f), engine='fastparquet') for f in parquets
]

# Concatenate DataFrames
df_parquet = pd.concat(chunks_list, ignore_index=True)
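
The comment above mentions decoding the columns; a minimal sketch of that step, assuming the text fields come back as raw bytes in the original ISO-8859-1 encoding:

# Decode any bytes values left in object columns back to str
for col in df_parquet.select_dtypes(include="object").columns:
    df_parquet[col] = df_parquet[col].apply(
        lambda v: v.decode("iso-8859-1") if isinstance(v, (bytes, bytearray)) else v
    )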

License

GNU Affero General Public License (AGPL-3.0)

This license ensures that the software remains open-source and free to use, modify, and distribute while requiring that any changes or enhancements made to the codebase are also made available to the community under the same terms.

Acknowledgements

This program decompresses .dbc files to .dbf. The code is based on the work of Mark Adler (madler@alumni.caltech.edu, zlib/blast) and Pablo Fonseca (https://github.com/eaglebh/blast-dbf).

PySUS has further extended and adapted this code to create pyreaddbc. The original work of Mark Adler and Pablo Fonseca is much appreciated for its contribution to this project.

Note: pyreaddbc is maintained with funding from AlertaDengue.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pyreaddbc-1.2.0.tar.gz (57.7 kB)

Uploaded: Source

Built Distribution

pyreaddbc-1.2.0-cp311-cp311-manylinux_2_37_x86_64.whl (64.7 kB)

Uploaded: CPython 3.11, manylinux: glibc 2.37+, x86-64

File details

Details for the file pyreaddbc-1.2.0.tar.gz.

File metadata

  • Download URL: pyreaddbc-1.2.0.tar.gz
  • Upload date:
  • Size: 57.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.6.1 CPython/3.11.5 Linux/6.2.0-32-generic

File hashes

Hashes for pyreaddbc-1.2.0.tar.gz

  • SHA256: 5a4733ceeeec2409829e281e738d69e063f5dbdd38b05fb6254d7e8454a0fe80
  • MD5: 40a43ecbe08c8d50ef1707675a0ebda9
  • BLAKE2b-256: 8814bd7247fb5882fa5834b00ae33799c26941a5db9985beecabcc9c375dd231


File details

Details for the file pyreaddbc-1.2.0-cp311-cp311-manylinux_2_37_x86_64.whl.

File metadata

File hashes

Hashes for pyreaddbc-1.2.0-cp311-cp311-manylinux_2_37_x86_64.whl

  • SHA256: 48446cbd497da0b4ec2ad272c050cfad366844af5da8fd7113851c8856e40ace
  • MD5: 6d517a5be942cf43da9def8c4b0243a0
  • BLAKE2b-256: 08d4b4a5d5e0354e966d51bbce00be0df67dedd797d76808c152b90af7c6fac3
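
To verify a downloaded artifact against the digests listed above, the standard-library hashlib module is enough; a small sketch for the source distribution:

import hashlib

# Compute the SHA256 of the downloaded file and compare it to the published digest
with open("pyreaddbc-1.2.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "5a4733ceeeec2409829e281e738d69e063f5dbdd38b05fb6254d7e8454a0fe80"
print(digest == expected)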

