
Project description

Classy few shot classification

This repository contains an easy and intuitive approach to zero and few-shot text classification using sentence-transformers, huggingface or word embeddings within a Spacy pipeline.

Why?

Hugging Face does offer some nice models for few/zero-shot classification, but these are not tailored to multi-lingual approaches. Rasa NLU has a nice approach for this, but it's too embedded in their codebase for easy usage outside of Rasa/chatbots. Additionally, it made sense to integrate sentence-transformers and Hugging Face zero-shot models instead of default word embeddings. Finally, I decided to integrate with spaCy, since training a custom spaCy TextCategorizer seems like a lot of hassle if you want something quick and dirty.
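To give a feel for the few-shot idea before the real examples below: label some example sentences, turn them into vectors, and compare new text against the per-label averages. This is only an illustration in pure Python with a crude bag-of-words "embedding"; the library itself uses sentence embeddings and an SVC, not this sketch.

```python
# Minimal sketch of few-shot classification: embed each labelled example,
# build one centroid vector per label, classify by cosine similarity.
# NOTE: toy bag-of-words vectors for illustration only; classy-classification
# uses real sentence embeddings and an SVC under the hood.
from collections import Counter
import math

def embed(text):
    # crude "embedding": lowercase bag of words, punctuation stripped
    return Counter(text.lower().replace(".", "").replace(",", "").split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroids(data):
    # sum the example vectors per label
    out = {}
    for label, examples in data.items():
        c = Counter()
        for ex in examples:
            c.update(embed(ex))
        out[label] = c
    return out

def classify(text, cents):
    sims = {label: cosine(embed(text), c) for label, c in cents.items()}
    return max(sims, key=sims.get)

data = {
    "furniture": ["I really need to get a new sofa."],
    "kitchen": ["I hope to be getting a new stove today."],
}
cents = centroids(data)
print(classify("shopping for a new sofa", cents))  # -> furniture
```

Swapping the toy `embed` for a sentence-transformers model and the cosine comparison for a trained classifier is, roughly, what the library does for you.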

Install

pip install classy-classification

Quickstart

Take a look at the examples directory. Use data from any language, and choose a model from sentence-transformers or a Hugging Face zero-shot model.

import spacy
import classy_classification

data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

nlp = spacy.blank("en")

# option 1: spaCy-internal embeddings (requires an md or lg model)
nlp.add_pipe("text_categorizer",
    config={"data": data, "model": "spacy"}
)

# option 2: sentence-transformers
nlp.add_pipe("text_categorizer",
    config={"data": data, "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"}
)

# option 3: Hugging Face zero-shot (needs only the label names)
nlp.add_pipe("text_categorizer",
    config={"data": list(data.keys()), "cat_type": "zero", "model": "facebook/bart-large-mnli"}
)

nlp("I am looking for kitchen appliances.")._.cats
nlp.pipe(["I am looking for kitchen appliances."])


More examples

Some quick and dirty training data.

training_data = {
    "politics": [
        "Putin orders troops into pro-Russian regions of eastern Ukraine.",
        "The president decided not to go through with his speech.",
        "There is much uncertainty surrounding the coming elections.",
        "Democrats are engaged in a ‘new politics of evasion’."
    ],
    "sports": [
        "The soccer team lost.",
        "The team won by two against zero.",
        "I love all sport.",
        "The olympics were amazing.",
        "Yesterday, the tennis players wrapped up wimbledon."
    ],
    "weather": [
        "It is going to be sunny outside.",
        "Heavy rainfall and wind during the afternoon.",
        "Clear skies in the morning, but mist in the evening.",
        "It is cold during the winter.",
        "There is going to be a storm with heavy rainfall."
    ]
}

validation_data = [
    "I am surely talking about politics.",
    "Sports is all you need.",
    "Weather is amazing."
]

Internal spaCy word2vec embeddings

import spacy
import classy_classification

nlp = spacy.load("en_core_web_md")
nlp.add_pipe("text_categorizer", config={"data": training_data, "model": "spacy"})  # use internal embeddings from the spaCy model
nlp(validation_data[0])._.cats
nlp.pipe(validation_data)

Using as an individual sentence-transformer

from classy_classification import classyClassifier

classifier = classyClassifier(data=training_data)
classifier(validation_data[0])
classifier.pipe(validation_data)

# overwrite training data
classifier.set_training_data(data=new_training_data)

# overwrite embedding model (see https://www.sbert.net/docs/pretrained_models.html)
classifier.set_embedding_model(model="paraphrase-MiniLM-L3-v2")

# overwrite SVC config
classifier.set_svc(
    config={                              
        "C": [1, 2, 5, 10, 20, 100],
        "kernels": ["linear"],                              
        "max_cross_validation_folds": 5
    }
)
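The SVC config above reads as a hyperparameter grid rather than a single setting: presumably each C/kernel combination is scored with `max_cross_validation_folds`-fold cross-validation and the best one is kept, as in the Rasa NLU classifier that inspired this project. A small pure-Python sketch of how that grid expands (illustration only, not the library's internals):

```python
# Expand the SVC search grid into individual candidate settings.
# Each candidate would be scored with k-fold cross-validation (k = 5 here)
# and the best-scoring one kept; the search itself is done by the library.
from itertools import product

config = {
    "C": [1, 2, 5, 10, 20, 100],
    "kernels": ["linear"],
    "max_cross_validation_folds": 5,
}

candidates = [
    {"C": c, "kernel": k}
    for c, k in product(config["C"], config["kernels"])
]
print(len(candidates))  # 6 candidate settings
```

Adding more kernels (e.g. "rbf") multiplies the number of candidates, so each extra list entry makes the cross-validation proportionally slower.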

External sentence-transformer within a spaCy pipeline for few-shot

import spacy
import classy_classification

nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": training_data})  # defaults to a sentence-transformers model
nlp(validation_data[0])._.cats
nlp.pipe(validation_data)

External Hugging Face model within a spaCy pipeline for zero-shot

import spacy
import classy_classification

nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": list(training_data.keys()), "cat_type": "zero"})  # zero-shot needs only the label names
nlp(validation_data[0])._.cats
nlp.pipe(validation_data)

Todo

[ ] Look into a way to integrate spaCy trf models.
[ ] Multiple classification datasets for a single input, e.g. emotions and topic.

