
Classy few-shot classification

This repository contains an easy and intuitive approach to zero- and few-shot text classification using sentence-transformers, Hugging Face zero-shot models, or word embeddings within a spaCy pipeline.

Why?

Hugging Face does offer some nice models for few/zero-shot classification, but these are not tailored to multilingual approaches. Rasa NLU has a nice approach for this, but it's too embedded in their codebase for easy usage outside of Rasa/chatbots. Additionally, it made sense to integrate sentence-transformers and Hugging Face zero-shot models instead of default word embeddings. Finally, I decided to integrate with spaCy, since training a custom spaCy TextCategorizer seems like a lot of hassle if you want something quick and dirty.

Install

pip install classy-classification

Quickstart

Take a look at the examples directory. Use data in any language, and choose a model from sentence-transformers or a Hugging Face zero-shot model.

import spacy
import classy_classification


data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

nlp = spacy.blank("en")

classification_type = "spacy_few_shot"
if classification_type == "spacy_few_shot":
    nlp.add_pipe("text_categorizer", 
        config={"data": data, "model": "spacy"}
    ) 
elif classification_type == "sentence_transformer_few_shot":
    nlp.add_pipe("text_categorizer", 
        config={"data": data, "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"}
    ) 
elif classification_type == "huggingface_zero_shot":
    nlp.add_pipe("text_categorizer", 
        config={"data": list(data.keys()), "cat_type": "zero", "model": "facebook/bart-large-mnli"}
    )

print(nlp(\"I am looking for kitchen appliances.\")._.cats)
print([doc._.cats for doc in nlp.pipe([\"I am looking for kitchen appliances.\"])])
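Both calls fill the `._.cats` extension with a score per label. The exact values depend on the chosen model and training data; the output below is purely illustrative.

# illustrative output, not a guaranteed result:
# {'furniture': 0.21, 'kitchen': 0.79}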

Credits

Inspiration Drawn From

Or buy me a coffee

"Buy Me A Coffee"

More examples

Some quick and dirty training data.

training_data = {
    "politics": [
        "Putin orders troops into pro-Russian regions of eastern Ukraine.",
        "The president decided not to go through with his speech.",
        "There is much uncertainty surrounding the coming elections.",
        "Democrats are engaged in a ‘new politics of evasion’."
    ],
    "sports": [
        "The soccer team lost.",
        "The team won by two against zero.",
        "I love all sport.",
        "The olympics were amazing.",
        "Yesterday, the tennis players wrapped up wimbledon."
    ],
    "weather": [
        "It is going to be sunny outside.",
        "Heavy rainfall and wind during the afternoon.",
        "Clear skies in the morning, but mist in the evenening.",
        "It is cold during the winter.",
        "There is going to be a storm with heavy rainfall."
    ]
}

validation_data = [
    "I am surely talking about politics.",
    "Sports is all you need.",
    "Weather is amazing."
]

Internal spaCy word2vec embeddings

import spacy
import classy_classification

nlp = spacy.load("en_core_web_md") 
nlp.add_pipe("text_categorizer", config={"data": training_data, "model": "spacy"}) #use internal embeddings from spacy model
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])
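Under the hood, the "spacy" model option relies on the document vectors that ship with en_core_web_md. The sketch below is only an illustration of that idea (averaged word vectors compared by cosine similarity), not the package's actual implementation.

import spacy
from numpy import dot
from numpy.linalg import norm

nlp = spacy.load("en_core_web_md")

# A spaCy doc vector is the average of its word vectors; comparing a query
# against a class example by cosine similarity approximates the signal the
# few-shot classifier is trained on.
query = nlp(validation_data[0]).vector
example = nlp(training_data["politics"][1]).vector
print(dot(query, example) / (norm(query) * norm(example)))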

Using as an individual sentence-transformer

from classy_classification import classyClassifier

classifier = classyClassifier(data=training_data)
classifier(validation_data[0])
classifier.pipe(validation_data)

# overwrite training data
classifier.set_training_data(data=new_training_data)

# overwrite the embedding model (see https://www.sbert.net/docs/pretrained_models.html for options)
classifier.set_embedding_model(model="paraphrase-MiniLM-L3-v2")

# overwrite SVC config
classifier.set_svc(
    config={
        "C": [1, 2, 5, 10, 20, 100],
        "kernels": ["linear"],
        "max_cross_validation_folds": 5
    }
)
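The set_svc config corresponds to a cross-validated support vector classifier fitted on sentence embeddings. The snippet below is a rough sketch of that idea using sentence-transformers and scikit-learn directly (note that scikit-learn's SVC takes a singular "kernel" parameter); it is not the package's internal code.

from sentence_transformers import SentenceTransformer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

encoder = SentenceTransformer("paraphrase-MiniLM-L3-v2")

# Embed the few-shot examples and keep their labels (training_data as defined above).
texts = [text for examples in training_data.values() for text in examples]
labels = [label for label, examples in training_data.items() for _ in examples]
X = encoder.encode(texts)

# Cross-validated grid search over C and kernel, mirroring the config above.
folds = min(5, min(labels.count(label) for label in set(labels)))
svc = GridSearchCV(
    SVC(probability=True),
    param_grid={"C": [1, 2, 5, 10, 20, 100], "kernel": ["linear"]},
    cv=folds,
)
svc.fit(X, labels)

query = encoder.encode([validation_data[0]])
print(dict(zip(svc.classes_, svc.predict_proba(query)[0])))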

External sentence-transformer within a spaCy pipeline for few-shot classification

import spacy
import classy_classification

nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": training_data}) #
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])

External Hugging Face model within a spaCy pipeline for zero-shot classification

import spacy
import classy_classification

nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": training_data, "cat_type": "zero"}) #
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])
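For zero-shot classification only the label names are needed. Outside of spaCy, a roughly equivalent setup with the transformers library looks like the sketch below (assuming the standard zero-shot-classification pipeline and the facebook/bart-large-mnli model from the Quickstart; this is not the package's own wrapper).

from transformers import pipeline

# Standard Hugging Face zero-shot pipeline; the candidate labels are the keys of training_data.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = zero_shot(validation_data[0], candidate_labels=list(training_data.keys()))
print(dict(zip(result["labels"], result["scores"])))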

Todo

[ ] Look into a way to integrate spaCy trf models.
[ ] Multiple classification datasets for a single input, e.g. emotions and topic.
