Integrated CSV to RDF converter, using CSVW and nanopublications

Project description

CSV on the Web (CoW)

CoW is a tool to convert CSV files into Linked Data. Specifically, CoW is an integrated CSV-to-RDF converter that uses the W3C CSVW standard for rich semantic table specifications and produces nanopublications as its output RDF model. CoW can convert any CSV file into an RDF dataset.

Documentation and support

For user documentation, see the basic introduction video and the GitHub wiki. Technical details are provided below. If you encounter an issue, please report it; pull requests are also welcome.

Quick Start Guide

There are two ways to run CoW: the quickest is via Docker, the more flexible via pip.

Docker Image

Several data science tools, including CoW, are available via a Docker image.

Install

First, install the Docker virtualisation engine on your computer. Instructions on how to accomplish this can be found on the official Docker website. Use the following command in the Docker terminal:

# docker pull wxwilcke/datalegend

Here, the #-symbol refers to the terminal of a user with administrative privileges on your machine and is not part of the command.

After the image has successfully been downloaded (or 'pulled'), the container can be run as follows:

# docker run --rm -p 3000:3000 -it wxwilcke/datalegend

The virtual system can now be accessed by opening http://localhost:3000/wetty in your preferred browser, and by logging in using username datalegend and password datalegend.

For detailed instructions on this Docker image, see DataLegend Playground. For instructions on how to use the tool, see usage below.

Command Line Interface (CLI)

The Command Line Interface (CLI) is the recommended way of using CoW for most users.

Install

Check whether the latest version of Python is installed on your device. For Windows/macOS we recommend installing Python via the official distribution page.

The recommended method of installing CoW on your system is pip3:

pip3 install cow-csvw

You can upgrade your currently installed version with:

pip3 install cow-csvw --upgrade

Possible installation issues:

  • Permission issues: you can get around these by installing CoW in user space: pip3 install cow-csvw --user.
  • Cannot find command: make sure your binary user directory (typically something like /Users/user/Library/Python/3.7/bin on macOS or /home/user/.local/bin on Linux) is in your PATH (on macOS: /etc/paths).
  • If your issue is not listed here, please report it.
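To diagnose the "cannot find command" case, you can ask Python itself where per-user scripts are installed and check whether that directory is on your PATH. This is a minimal sketch using only the standard library; the exact directory varies by platform and Python version.

```python
import os
import sysconfig

# Directory where `pip3 install --user` places console scripts such as
# cow_tool (e.g. ~/.local/bin on Linux, ~/Library/Python/3.x/bin on macOS).
user_scripts = sysconfig.get_path("scripts", scheme=f"{os.name}_user")
print("user script directory:", user_scripts)

# Check whether that directory appears in the PATH environment variable.
on_path = user_scripts in os.environ.get("PATH", "").split(os.pathsep)
print("on PATH:", on_path)
```

If the directory is missing from PATH, add it in your shell profile (or, on macOS, in /etc/paths) and open a new terminal.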

Usage

Start the graphical interface by entering the following command:

cow_tool

Select a CSV file and click build to generate a file named myfile.csv-metadata.json (a JSON schema file) with your mappings. Optionally edit this file, then click convert to convert the CSV file to RDF. The output should be a myfile.csv.nq RDF file (N-Quads by default).

Command Line Interface

A straightforward CSV-to-RDF conversion is done by entering the following commands:

cow_tool_cli build myfile.csv

This will create a file named myfile.csv-metadata.json (JSON schema file). Next:

cow_tool_cli convert myfile.csv

This command will output a myfile.csv.nq RDF file (nquads by default).
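As a quick sanity check on the output, you can count the statements in the generated file. In well-formed N-Quads, each non-empty, non-comment line is one quad. This is only a rough check under that assumption; for real processing use a proper RDF library such as rdflib.

```python
def count_quads(text: str) -> int:
    """Count non-empty, non-comment lines: in well-formed N-Quads,
    that is the number of quads."""
    return sum(
        1
        for line in text.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    )

# A single hypothetical quad, for illustration.
sample = (
    '<http://example.org/s> <http://example.org/p> "o" '
    '<http://example.org/g> .\n'
)
print(count_quads(sample))  # 1
```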

You don't need to worry about the JSON file unless you want to change the metadata schema. To control the base URI namespace, the URIs used in predicates, virtual columns, etc., edit the myfile.csv-metadata.json file and/or use CoW's command-line options. For instance, you can control the output RDF serialization (e.g. --format turtle). Have a look at the options below, the examples in the GitHub wiki, and the technical documentation.
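For orientation, CSVW metadata follows the W3C CSVW vocabulary. The hand-written fragment below is only an illustration of that vocabulary, not CoW's actual output; the file CoW generates with build will contain more (and possibly different) keys, so always start from the generated file rather than writing one from scratch.

```json
{
  "@context": "http://www.w3.org/ns/csvw",
  "url": "myfile.csv",
  "tableSchema": {
    "aboutUrl": "http://example.org/my-dataset/{_row}",
    "columns": [
      {
        "name": "country",
        "titles": "country",
        "datatype": "string",
        "propertyUrl": "http://example.org/my-dataset/country"
      }
    ]
  }
}
```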

Options

Check the --help for a complete list of options:

usage: cow_tool_cli [-h] [--dataset DATASET] [--delimiter DELIMITER]
                    [--quotechar QUOTECHAR] [--encoding ENCODING] [--processes PROCESSES]
                    [--chunksize CHUNKSIZE] [--base BASE]
                    [--format [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}]]
                    [--gzip] [--version]
                    {convert,build} file [file ...]

Not nearly CSVW compliant schema builder and RDF converter

positional arguments:
  {convert,build}       Use the schema of the `file` specified to convert it
                        to RDF, or build a schema from scratch.
  file                  Path(s) of the file(s) that should be used for
                        building or converting. Must be a CSV file.

optional arguments:
  -h, --help            show this help message and exit
  --dataset DATASET     A short name (slug) for the name of the dataset (will
                        use input file name if not specified)
  --delimiter DELIMITER
                        The delimiter used in the CSV file(s)
  --quotechar QUOTECHAR
                        The character used as quotation character in the CSV
                        file(s)
  --encoding ENCODING   The character encoding used in the CSV file(s)
  --processes PROCESSES
                        The number of processes the converter should use
  --chunksize CHUNKSIZE
                        The number of rows processed at each time
  --base BASE           The base for URIs generated with the schema (only
                        relevant when `build`ing a schema)
  --gzip                Compress the output file using gzip
  --format [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}], -f [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}]
                        RDF serialization format
  --version             show program's version number and exit
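If you are unsure which --delimiter and --quotechar values your file needs, Python's standard csv.Sniffer can guess them from a sample. A minimal sketch (the sniffer can be fooled by unusual files, so treat its answer as a suggestion, not ground truth):

```python
import csv

# A small hypothetical sample: semicolon-delimited, double-quoted values.
sample = 'name;"year"\nAmsterdam;"1275"\nUtrecht;"1122"\n'

# Sniff the dialect from the sample text.
dialect = csv.Sniffer().sniff(sample)
print("delimiter:", repr(dialect.delimiter))   # ';'
print("quotechar:", repr(dialect.quotechar))   # '"'
```

The detected values can then be passed straight to cow_tool_cli via --delimiter and --quotechar.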

Library

Once installed, CoW can be used as a library as follows:

from cow_csvw.csvw_tool import COW
import os

path = '/path/to/data'   # directory containing your CSV file (placeholder)
filename = 'myfile.csv'  # the CSV file to process (placeholder)

# Build a CSVW JSON schema for the file.
COW(mode='build', files=[os.path.join(path, filename)], dataset='My dataset', delimiter=';', quotechar='"')

# Convert the file to RDF using that schema.
COW(mode='convert', files=[os.path.join(path, filename)], dataset='My dataset', delimiter=';', quotechar='"', processes=4, chunksize=100, base='http://example.org/my-dataset', format='turtle', gzipped=False)

Further Information

Examples

The GitHub wiki provides more hands-on examples of transposing CSVs into Linked Data.

Technical documentation

Technical documentation for CoW is maintained in this GitHub repository (under ), and published through Read the Docs at http://csvw-converter.readthedocs.io/en/latest/.

To build the documentation from source, change into the docs directory, and run make html. This should produce an HTML version of the documentation in the _build/html directory.

License

MIT License (see license.txt)

Acknowledgements

Authors: Albert Meroño-Peñuela, Roderick van der Weerdt, Rinke Hoekstra, Kathrin Dentler, Auke Rijpma, Richard Zijdeman, Melvin Roest, Xander Wilcke

Copyright: Vrije Universiteit Amsterdam, Utrecht University, International Institute of Social History

CoW is developed and maintained by the CLARIAH project and funded by NWO.

Download files


Source Distribution

cow_csvw-2.0.0.tar.gz (59.9 kB view details)


File details

Details for the file cow_csvw-2.0.0.tar.gz.

File metadata

  • Download URL: cow_csvw-2.0.0.tar.gz
  • Upload date:
  • Size: 59.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.7

File hashes

Hashes for cow_csvw-2.0.0.tar.gz:

  • SHA256: a762bfb0b1db578bd63bffd670f4a7372e071a75d1cf15393e5fc4c71de09f52
  • MD5: bee7ed0bf5a59c0393e1eb63d3322805
  • BLAKE2b-256: 14246eb7a76f272b61b597627720acdd4507e261e8403f8c18287ee9f0df9b56

