
DaCy is a Danish preprocessing pipeline trained in SpaCy. At the time of writing it has achieved state-of-the-art performance on all benchmark tasks for Danish.

Project description

DaCy: A SpaCy NLP Pipeline for Danish


DaCy is a Danish preprocessing pipeline trained in SpaCy. It has achieved state-of-the-art performance on named entity recognition, part-of-speech tagging and dependency parsing for Danish. This repository contains material for using DaCy, reproducing the results, and guides on usage of the package. Furthermore, it also includes a behavioural test for biases and robustness of Danish NLP pipelines.

📰 News

  • 1.0.0 (09/07/21)
    • DaCy version 1.0.0 released.
      • Including a series of augmenters, a few of which are specifically designed for Danish
      • Code for behavioural tests of NLP pipelines
      • A new tutorial covering both 📖
    • The first paper on DaCy; check it out as a preprint here, along with the code for reproducing it! 🌟
    • A new beautiful hand-drawn logo 🤩
    • A behavioural test for biases and robustness in Danish NLP pipelines 🧐
  • 0.4.1 (03/06/21)
  • 0.3.1 (01/06/21)
    • DaCy's tests now cover 99% of its codebase 🎉
    • DaCy's test suite is now being applied for all major operating systems instead of just linux 👩‍💻
  • 0.2.2 (25/05/21)
    • The new Danish model Senda was added to DaCy
  • 0.2.1 (30/03/21)
    • DaCy now includes a small model for efficient processing based on the Danish Ælæctra 🏃
Older news items
  • 0.1.1 (24/03/21)
    • DaCy now includes wrapped versions of major Danish sentiment analysis software, including the models by DaNLP, as well as code for wrapping any sequence classification model into its pipeline 🤩
    • Tutorials were added to introduce the above functionality
  • 0.0.1 (25/02/21)
    • DaCy launches with a medium-sized and a large language model, obtaining state-of-the-art performance on named entity recognition, part-of-speech tagging and dependency parsing for Danish 🇩🇰

🔧 Installation

It is currently only possible to install DaCy directly from GitHub; however, this can be done quite easily using:

pip install git+https://github.com/KennethEnevoldsen/DaCy
Detailed instructions

Install from source

git clone https://github.com/KennethEnevoldsen/DaCy.git
cd DaCy
pip install .

👩‍💻 Usage

To use the model you first have to download either the small, medium, or large model. To see a list of all available models:

import dacy
for model in dacy.models():
    print(model)
# da_dacy_small_tft-0.0.0
# da_dacy_medium_tft-0.0.0
# da_dacy_large_tft-0.0.0

To download and load a model simply execute:

nlp = dacy.load("da_dacy_medium_tft-0.0.0")

This will download the model to the .dacy directory in your home directory.

To download the model to a specific directory:

dacy.download_model("da_dacy_medium_tft-0.0.0", your_save_path)
nlp = dacy.load_model("da_dacy_medium_tft-0.0.0", your_save_path)
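
Once loaded, the pipeline is a regular SpaCy Language object, so the usual SpaCy document API applies. Below is a minimal sketch of running the pipeline on a sentence and reading off entities, part-of-speech tags and dependencies (the example sentence is made up for illustration):

import dacy

nlp = dacy.load("da_dacy_medium_tft-0.0.0")
doc = nlp("DaCy er en effektiv NLP-pipeline til dansk.")

# named entities
print([(ent.text, ent.label_) for ent in doc.ents])

# part-of-speech tags and dependency relations
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)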

For more on how to use DaCy please check out our documentation.

👩‍🏫 Tutorials and documentation

DaCy also includes detailed documentation as well as a series of Jupyter notebook tutorials. If you do not have Jupyter Notebook installed, instructions for installing and running it can be found here. All the tutorials are located in the tutorials folder.

Content Google Colab
🌟 Getting Started An introduction on how to use DaCy
📖 Documentation The Documentation of DaCy
😡😂 Sentiment A simple introduction to the new sentiment features in DaCy. Open In Colab
😎 Wrapping a fine-tuned Transformer A guide on how to wrap an already fine-tuned transformer and add it to your SpaCy pipeline using DaCy helper functions. Open In Colab

🦾 Performance and Training

The following table shows the performance on the DaNE test set when compared to other models. The highest scores are highlighted in bold and the second highest are underlined.

Stanza uses the spacy-stanza implementation. The speed on the DaNLP model is as reported by the framework, which does not utilize batch input. However, given the model size, it can be expected to reach speeds comparable to DaCy medium. Empty cells indicate that the framework does not include the specific model.
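
As a rough sketch of what batched input looks like in practice: SpaCy pipelines, including DaCy, can process several texts at once via nlp.pipe, which is the standard SpaCy API the speed comparison assumes (the texts and batch size below are arbitrary examples):

import dacy

nlp = dacy.load("da_dacy_medium_tft-0.0.0")

texts = [
    "DaCy er en dansk NLP-pipeline.",
    "Den er trænet i SpaCy.",
]

# nlp.pipe batches the documents, which is considerably faster for
# transformer-based pipelines than calling nlp(text) one text at a time
for doc in nlp.pipe(texts, batch_size=32):
    print([(ent.text, ent.label_) for ent in doc.ents])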

Training and reproduction

The folder training contains a SpaCy project which allows for a reproduction of the results. This folder also includes the evaluation metrics on DaNE and scripts for downloading the required data. For more information, please see the training readme.

If you want to learn more about how DaCy initially came to be, check out this blog post.

Robustness and Biases

DaCy compares the performance of Danish language processing pipelines under a large variety of augmentations to test their robustness and biases. To find out more, please check the website.
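
DaCy ships its own augmenters (see the 1.0.0 news item above); as an illustration of the general mechanism, the sketch below uses one of SpaCy's built-in augmenters, which lower-cases a training example, to show how an augmenter transforms an Example before a pipeline is evaluated on it (the sentence is made up, and this is not DaCy's own augmenter API):

import spacy
from spacy.training import Example
from spacy.training.augment import create_lower_casing_augmenter

nlp = spacy.blank("da")
text = "Hans Christian Andersen blev født i Odense."
example = Example(nlp.make_doc(text), nlp.make_doc(text))

# level=1.0 means the augmentation is always applied
augmenter = create_lower_casing_augmenter(level=1.0)
for augmented in augmenter(nlp, example):
    print(augmented.reference.text)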

🤔 Issues and Usage Q&A

To ask questions, report issues or request features, please use the GitHub Issue Tracker. Questions related to SpaCy are kindly referred to the SpaCy GitHub or forum.

FAQ

Where is my DaCy model located?

To figure out where your DaCy model is located you can always use:

from dacy import where_is_my_dacy
where_is_my_dacy()
Why don't the performance metrics match those reported on the DaNLP GitHub?

The performance metrics reported by DaNLP give the model the 'gold standard' tokenization of the dataset, as opposed to having the pipeline tokenize the text itself. This allows models to be compared on even ground regardless of their tokenizer, but it inflates the performance in general. DaCy, on the other hand, reports the performance metrics using a tokenizer; this makes the results closer to what you would see on a real dataset and reflects how tokenization influences performance. All models were tested using either their own tokenizer or SpaCy's Danish tokenizer, depending on which performed best. All models except Stanza and Polyglot were found to perform best with the SpaCy tokenizer.
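
To make the difference concrete, here is a minimal sketch of such an evaluation using SpaCy's Example and Language.evaluate: the gold tokens and entity annotations below are invented for illustration, while the pipeline tokenizes the raw text itself, so tokenization mistakes count against the reported scores.

import dacy
from spacy.training import Example

nlp = dacy.load("da_dacy_medium_tft-0.0.0")

text = "Kenneth bor i Aarhus."
example = Example.from_dict(
    nlp.make_doc(text),  # the pipeline's own tokenization of the raw text
    {
        "words": ["Kenneth", "bor", "i", "Aarhus", "."],  # gold-standard tokens (invented)
        "entities": [(0, 7, "PER"), (14, 20, "LOC")],  # character offsets of gold entities
    },
)

scores = nlp.evaluate([example])
print(scores["token_acc"], scores["ents_f"])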

How do I test the code and run the test suite?

DaCy comes with an extensive test suite. In order to run the tests, you'll usually want to clone the repository and build DaCy from source. You will also need the development dependencies and test utilities defined in requirements.txt:

pip install -r requirements.txt
pip install pytest

python -m pytest

which will run all the tests in the dacy/tests folder.

To run a specific test, for instance the tests of the readability functions, you can run:

python -m pytest dacy/tests/test_readability.py

Code Coverage: If you want to check code coverage as well, you can run the following:

pip install pytest-cov

python -m pytest --cov=.
Why is vaderSentiment_da.py excluded from the coverage test?

It is excluded as the functionality is intended to move to another repository called sentida2, which is currently under development.

Does DaCy run on X?

DaCy is intended to run on all major operating systems, including Windows (latest version), macOS (Catalina) and the latest version of Linux (Ubuntu). Below you can see whether DaCy passes its test suite for the system of interest. Please note these are only the systems DaCy is actively tested on; if you run a similar system (e.g. an earlier version of Linux), DaCy will likely run there as well.

Operating System Status
Ubuntu (Latest) github actions pytest ubuntu
MacOS (Catalina) github actions pytest catalina
Windows (Latest) github actions pytest windows
How is the documentation generated?

DaCy uses sphinx to generate documentation. It uses the Furo theme with custom styling.

To make the documentation you can run:

# install sphinx, themes and extensions
pip install sphinx furo sphinx-copybutton sphinxext-opengraph

# generate html from the documentation
make -C docs html

Acknowledgements

DaCy is a result of great open-source software and contributors. It wouldn't have been possible without the work by the SpaCy team, which developed and integrated the software; Huggingface, for developing Transformers and making model sharing convenient; BotXO, for training and sharing the Danish BERT model; and Malte Hojmark-Bertelsen, for making it easily available and developing Ælæctra. A huge compliment also goes out to DaNLP, which has made it easy to get access to Danish resources and even supplied some of the tagged data themselves.

References

If you use this library in your research, please kindly cite:

@inproceedings{dacy2021,
    title={DaCy: A Unified Framework for Danish NLP},
    author={Enevoldsen, Kenneth},
    year={2021}
}

To read more about this paper or to see the code for reproducing the results, please check out the associated readme.

License

DaCy is released under the Apache License, Version 2.0. See the LICENSE file for more details.

Contact

For feature requests, issues and bugs, please use the GitHub Issue Tracker. Otherwise, please use the Discussion Forums.

