Project description
Classy Classification
Have you ever struggled with needing a spaCy TextCategorizer but didn't have the time to train one from scratch? Classy Classification is the way to go! For few-shot classification using sentence-transformers or spaCy models, provide a dictionary with labels and examples, or just provide a list of labels for zero-shot classification with Hugging Face zero-shot classifiers.
Install
pip install classy-classification
Quickstart
import spacy
import classy_classification

data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

classification_type = "spacy_few_shot"

# use internal spaCy embeddings with a few examples per label
if classification_type == "spacy_few_shot":
    nlp = spacy.blank("en")
    nlp.add_pipe("text_categorizer",
        config={"data": data, "model": "spacy"}
    )
# use sentence-transformer embeddings with a few examples per label
elif classification_type == "sentence_transformer_few_shot":
    nlp = spacy.blank("en")
    nlp.add_pipe("text_categorizer",
        config={"data": data, "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"}
    )
# use zero-shot classification with only a few labels
elif classification_type == "huggingface_zero_shot":
    nlp = spacy.blank("en")
    nlp.add_pipe("text_categorizer",
        config={"data": ["furniture", "kitchen"], "cat_type": "zero", "model": "facebook/bart-large-mnli"}
    )

print(nlp("I am looking for kitchen appliances.")._.cats)
# Output:
#
# [{"label": "furniture", "score": 0.21}, {"label": "kitchen", "score": 0.79}]
Credits
Inspiration Drawn From
Hugging Face does offer some nice models for few- and zero-shot classification, but these are not tailored to multilingual approaches. Rasa NLU has a nice approach for this, but it's too embedded in their codebase for easy usage outside of Rasa/chatbots. Additionally, it made sense to integrate sentence-transformers and Hugging Face zero-shot models instead of default word embeddings. Finally, I decided to integrate with spaCy, since training a custom spaCy TextCategorizer seems like a lot of hassle if you want something quick and dirty.
Or buy me a coffee
More examples
Some quick and dirty training data.
training_data = {
    "politics": [
        "Putin orders troops into pro-Russian regions of eastern Ukraine.",
        "The president decided not to go through with his speech.",
        "There is much uncertainty surrounding the coming elections.",
        "Democrats are engaged in a ‘new politics of evasion’."
    ],
    "sports": [
        "The soccer team lost.",
        "The team won by two against zero.",
        "I love all sport.",
        "The olympics were amazing.",
        "Yesterday, the tennis players wrapped up Wimbledon."
    ],
    "weather": [
        "It is going to be sunny outside.",
        "Heavy rainfall and wind during the afternoon.",
        "Clear skies in the morning, but mist in the evening.",
        "It is cold during the winter.",
        "There is going to be a storm with heavy rainfall."
    ]
}

validation_data = [
    "I am surely talking about politics.",
    "Sports is all you need.",
    "Weather is amazing."
]
internal spaCy word2vec embeddings
import spacy
import classy_classification
nlp = spacy.load("en_core_web_md")
nlp.add_pipe("text_categorizer", config={"data": training_data, "model": "spacy"}) #use internal embeddings from spacy model
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])
using classyClassifier as an individual sentence-transformer classifier
from classy_classification import classyClassifier
classifier = classyClassifier(data=training_data)
classifier(validation_data[0])
classifier.pipe(validation_data)
# overwrite training data (new_training_data is a placeholder with the same {label: [examples]} format as above)
classifier.set_training_data(data=new_training_data)

# overwrite [embedding model](https://www.sbert.net/docs/pretrained_models.html)
classifier.set_embedding_model(model="paraphrase-MiniLM-L3-v2")

# overwrite SVC config
classifier.set_svc(
    config={
        "C": [1, 2, 5, 10, 20, 100],
        "kernels": ["linear"],
        "max_cross_validation_folds": 5
    }
)
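For context, the set_svc keys above describe a cross-validated grid search over a scikit-learn SVC fitted on the sentence embeddings. The snippet below is only a hypothetical sketch of that idea, not the library's internal code; the function name fit_svc and the variables X and y are made up:

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_svc(X: np.ndarray, y, config: dict) -> GridSearchCV:
    # X holds one embedding vector per training sentence, y the matching labels
    param_grid = {"C": config["C"], "kernel": config["kernels"]}
    search = GridSearchCV(
        SVC(probability=True),  # probability=True so predictions come with scores
        param_grid,
        cv=config["max_cross_validation_folds"],  # folds must not exceed the smallest class size
    )
    return search.fit(X, y)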
external sentence-transformer within spaCy pipeline for few-shot
import spacy
import classy_classification
nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": training_data}) #
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])
external Hugging Face model within spaCy pipeline for zero-shot
import spacy
import classy_classification
nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer", config={"data": training_data, "cat_type": "zero"}) #
print(nlp(validation_data[0])._.cats)
print([doc._.cats for doc in nlp.pipe(validation_data)])
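For zero-shot, the pipe wraps a Hugging Face zero-shot classifier such as the facebook/bart-large-mnli model named in the Quickstart. As a rough, stand-alone illustration of the same technique, using the transformers pipeline directly rather than this library's internals:

from transformers import pipeline

zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = zero_shot("I am surely talking about politics.",
                   candidate_labels=["politics", "sports", "weather"])
print(result["labels"][0], result["scores"][0])  # labels are sorted by score, best first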
Todo
[ ] look into a way to integrate spaCy trf models.
[ ] multiple classification datasets for a single input, e.g. emotions and topic.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file classy-classification-0.3.4.tar.gz
File metadata
- Download URL: classy-classification-0.3.4.tar.gz
- Upload date:
- Size: 9.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.11 CPython/3.8.2 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | bfeae4f6e7a277173407bf30cb78425f5771e338c36d72fe2c5d14343ed497aa
MD5 | e83a3e84145c80b43ed0ba7cca82bc95
BLAKE2b-256 | 318ef70a1efee487bad6c4e05fa2d458495f51c0d3bef5c83231917e7ab524bf
File details
Details for the file classy_classification-0.3.4-py3-none-any.whl
File metadata
- Download URL: classy_classification-0.3.4-py3-none-any.whl
- Upload date:
- Size: 14.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.11 CPython/3.8.2 Windows/10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 71716a12b12c5f366b873d6a9644a55c651ccea88cb69f125ff3a628e95ae018
MD5 | 3194dbe7e77a69f76a02d2a868f0f943
BLAKE2b-256 | 5eb7873d0a8a8fc0c975c59be78fbcf32b9ffd77b983fa30eda19e93d6304b67