
fuzzycat (wip)

Fuzzy matching utilities for fatcat.

https://pypi-hypernode.com/project/fuzzycat/

To install with pip, run:

$ pip install fuzzycat

Photo by Chika Watanabe (CC BY 2.0).

Overview

The fuzzycat library currently works on fatcat database release dumps and can cluster similar release items; that is, it can find clusters of similar releases and verify match candidates within them.

For example we can identify:

  • versions of various items (arxiv, figshare, datacite, ...)
  • preprint and published pairs
  • similar items from different sources

TODO

  • take a list of title strings and return match candidates (faster than elasticsearch); e.g. derive a key and find similar keys in some cached clusters
  • take a list of (title, author) documents and return match candidates; e.g. the key may depend on the title only, but verification can be more precise
  • take a more complete, yet partial document and return match candidates

For this to work, we will need clusters from fatcat precomputed and cached. We might also want them sorted by key (a side effect of clustering) so we can binary search into the cluster file for the above TODO items.
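
A minimal lookup sketch, assuming a precomputed cluster file that is key-sorted, newline-delimited JSON with the key under "k" (the clustering output format shown below); for brevity we build a small in-memory key index once instead of bisecting the file by byte offset:

import json
from bisect import bisect_left

def build_index(path):
    """One pass over a key-sorted cluster file: collect (key, byte offset) per line."""
    keys, offsets = [], []
    with open(path, "rb") as f:
        pos = 0
        for line in f:
            keys.append(json.loads(line)["k"])
            offsets.append(pos)
            pos += len(line)
    return keys, offsets

def find_cluster(path, keys, offsets, key):
    """Binary search the key index, then seek directly to the matching line."""
    i = bisect_left(keys, key)
    if i == len(keys) or keys[i] != key:
        return None
    with open(path, "rb") as f:
        f.seek(offsets[i])
        return json.loads(f.readline())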

Dataset

For development, we worked on a release_export_expanded.json dump (113 GB zstd-compressed, 700 GB plain; 154,203,375 lines) and with the fatcat API.

The development workflow looked something like the following.

Clustering

Clustering derives sets of similar documents from a fatcat database release dump.

The following algorithms are implemented (or planned); illustrative key functions are sketched after the list:

  • exact title matches (title)
  • normalized title matches (tnorm)
  • NYSIIS encoded title matches (tnysi)
  • extended title normalization (tsandcrawler)
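
Illustrative key functions for the first three algorithms (ASCII-focused sketches, not the library's exact normalization; the NYSIIS variant assumes the third-party jellyfish package):

import re

import jellyfish  # third-party; pip install jellyfish

def key_title(doc):
    """Exact title matches: the title string itself."""
    return doc.get("title") or ""

def key_tnorm(doc):
    """Normalized title: lowercase, alphanumerics only (a simplification)."""
    return re.sub(r"[^a-z0-9]", "", (doc.get("title") or "").lower())

def key_tnysi(doc):
    """NYSIIS phonetic encoding of the title."""
    title = (doc.get("title") or "").strip()
    return jellyfish.nysiis(title) if title else ""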

Example running clustering:

$ python -m fuzzycat cluster -t tsandcrawler < data/re.json | zstd -c -T0 > cluster.json.zst

Clustering works as a three-step process (sketched after the list):

  1. key extraction for each document (choose algorithm)
  2. sorting by keys (via GNU sort)
  3. group by key and write out (itertools.groupby)
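
A minimal sketch of these three steps, with hypothetical file names and a key function from above:

import itertools
import json
import os
import subprocess

# 1. key extraction: one "key<TAB>original line" record per document
with open("data/re.json") as inp, open("keyed.tsv", "w") as out:
    for line in inp:
        key = key_tnorm(json.loads(line))  # any key function from above
        if key:
            out.write(key + "\t" + line.rstrip("\n") + "\n")

# 2. sort by key; GNU sort spills to disk, so this scales past memory
subprocess.run(
    ["sort", "-t", "\t", "-k", "1,1", "-o", "keyed.sorted.tsv", "keyed.tsv"],
    env={**os.environ, "LC_ALL": "C"},  # stable byte order
    check=True,
)

# 3. group by key and emit one cluster per line
with open("keyed.sorted.tsv") as f:
    for key, group in itertools.groupby(f, key=lambda l: l.split("\t", 1)[0]):
        docs = [json.loads(l.split("\t", 1)[1]) for l in group]
        print(json.dumps({"k": key, "v": docs}, ensure_ascii=False))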

Note: For long-running processes, this all-or-nothing approach is impractical; e.g. running clustering on the joint references and fatcat dataset (2B records) takes 24h+.

Ideas:

  • make (sorted) key extraction a fast standalone thing

cat data.jsonl | fuzzycat-key --algo X > data.key.tsv

Here, data.key.tsv would carry (id, key, blob) triples or the like. Make this run at line speed (maybe with Rust); we need to carry the blob along, as we do not want to restrict downstream options.
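
A plain Python baseline for that interface (hypothetical; the real tool might be Rust, as noted, and we assume the fatcat ident field for the id column):

import json
import sys

# emit (id, key, blob) per input line, so later sort/group stages
# never have to re-parse the blob
for line in sys.stdin:
    doc = json.loads(line)
    sys.stdout.write("%s\t%s\t%s\n" % (doc.get("ident", ""), key_tnorm(doc), line.rstrip("\n")))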

Verification

Run verification (a pairwise double-check of the match candidates in each cluster):

$ time zstdcat -T0 sample_cluster.json.zst | python -m fuzzycat verify > sample_verify.txt

real    7m56.713s
user    8m50.703s
sys     0m29.262s

This is a one-pass operation. For processing 150M docs, we depend on the documents being available on disk in a single file (the clustering result keeps the complete document).
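
Schematically, verification walks each cluster and checks every candidate pair. A toy version with a single year rule (illustrative only; the actual verify applies the full rule set tallied below):

import itertools
import json
import sys

def verify(a, b):
    """Toy stand-in for fuzzycat's pairwise checks: a single year rule."""
    ya, yb = a.get("release_year"), b.get("release_year")
    if ya and yb and abs(ya - yb) > 1:
        return ("Status.DIFFERENT", "Reason.YEAR")
    return ("Status.AMBIGUOUS", "Reason.DUMMY")

for line in sys.stdin:
    cluster = json.loads(line)
    for a, b in itertools.combinations(cluster["v"], 2):
        status, reason = verify(a, b)
        print(a.get("ident"), b.get("ident"), status, reason, sep="\t")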

Example results:

3450874 Status.EXACT Reason.TITLE_AUTHOR_MATCH
2619990 Status.STRONG Reason.SLUG_TITLE_AUTHOR_MATCH
2487633 Status.DIFFERENT Reason.YEAR
2434532 Status.EXACT Reason.WORK_ID
2085006 Status.DIFFERENT Reason.CONTRIB_INTERSECTION_EMPTY
1397420 Status.DIFFERENT Reason.SHARED_DOI_PREFIX
1355852 Status.DIFFERENT Reason.RELEASE_TYPE
1290162 Status.AMBIGUOUS Reason.DUMMY
1145511 Status.DIFFERENT Reason.BOOK_CHAPTER
1009657 Status.DIFFERENT Reason.DATASET_DOI
 996503 Status.STRONG Reason.PMID_DOI_PAIR
 868951 Status.EXACT Reason.DATACITE_VERSION
 796216 Status.STRONG Reason.DATACITE_RELATED_ID
 704154 Status.STRONG Reason.FIGSHARE_VERSION
 534963 Status.STRONG Reason.VERSIONED_DOI
 343310 Status.STRONG Reason.TOKENIZED_AUTHORS
 334974 Status.STRONG Reason.JACCARD_AUTHORS
 293835 Status.STRONG Reason.PREPRINT_PUBLISHED
 269366 Status.DIFFERENT Reason.COMPONENT
 263626 Status.DIFFERENT Reason.SUBTITLE
 224021 Status.AMBIGUOUS Reason.SHORT_TITLE
 152990 Status.DIFFERENT Reason.PAGE_COUNT
 133811 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_5860_CHOICE_REVIEW
 122600 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_7916
  79664 Status.STRONG Reason.CUSTOM_IEEE_ARXIV
  46649 Status.DIFFERENT Reason.CUSTOM_PREFIX_10_14288
  39797 Status.DIFFERENT Reason.JSTOR_ID
  38598 Status.STRONG Reason.CUSTOM_BSI_UNDATED
  18907 Status.STRONG Reason.CUSTOM_BSI_SUBDOC
  15465 Status.EXACT Reason.DOI
  13393 Status.DIFFERENT Reason.CUSTOM_IOP_MA_PATTERN
  10378 Status.DIFFERENT Reason.CONTAINER
   3081 Status.AMBIGUOUS Reason.BLACKLISTED
   2504 Status.AMBIGUOUS Reason.BLACKLISTED_FRAGMENT
   1273 Status.AMBIGUOUS Reason.APPENDIX
   1063 Status.DIFFERENT Reason.TITLE_FILENAME
    104 Status.DIFFERENT Reason.NUM_DIFF
      4 Status.STRONG Reason.ARXIV_VERSION

A full run

Single-threaded, 42h:

$ time zstdcat -T0 release_export_expanded.json.zst | \
    TMPDIR=/bigger/tmp python -m fuzzycat cluster --tmpdir /bigger/tmp -t tsandcrawler | \
    zstd -c9 > cluster_tsandcrawler.json.zst
{
  "key_fail": 0,
  "key_ok": 154202433,
  "key_empty": 942,
  "key_denylist": 0,
  "num_clusters": 124321361
}

real    2559m7.880s
user    2605m41.347s
sys     118m38.141s

So 154,202,433 − 124,321,361 = 29,881,072 docs (about 20%) land in the potentially duplicated set. Verification (about 15h without parallelization):

$ time zstdcat -T0 cluster_tsandcrawler.json.zst | python -m fuzzycat verify | \
    zstd -c9 > cluster_tsandcrawler_verified_3c7378.tsv.zst

...

real    927m28.631s
user    939m32.761s
sys     36m47.602s

Misc

Use cases

  • take a release entity database dump as JSON lines and cluster releases (according to various algorithms)
  • take cluster information and run a verification step (misc algorithms)
  • create a dataset that contains grouping of releases under works
  • command line tools to generate cache keys, e.g. to match reference strings to release titles (this needs some transparent setup, e.g. filling a cache before operations)

Usage

Release clustering starts with release entities as JSON lines.

$ cat data/sample.json | python -m fuzzycat cluster -t title > out.json

Clustering 1M records (single core) takes about 64s (15K docs/s).

$ head -1 out.json
{
  "k": "裏表紙",
  "v": [
    ...
  ]
}
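
Downstream steps typically only need clusters with more than one member; a minimal filter sketch (run it as e.g. zstdcat -T0 cluster.json.zst | python filter_clusters.py; script name hypothetical):

import json
import sys

# pass through only clusters with at least two members, i.e. match candidates
for line in sys.stdin:
    if len(json.loads(line)["v"]) > 1:
        sys.stdout.write(line)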

Use GNU parallel to make it faster:

$ cat data/sample.json | parallel -j 8 --pipe --roundrobin python -m fuzzycat.main cluster -t title

Interestingly, the parallel variant detects fewer clusters (because the data is split and clusters are only searched within each batch). TODO(miku): sort out the sharding bug; one possible repair, a merge pass over the shard outputs, is sketched below.
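
One possible repair (a sketch, not how fuzzycat resolves this): concatenate the shard outputs, sort them by key upstream (e.g. GNU sort over key-prefixed lines), then merge clusters that share a key:

import itertools
import json
import sys

# stdin: concatenated shard outputs, one {"k": ..., "v": [...]} per line,
# already sorted by key; merge adjacent clusters that share a key
clusters = (json.loads(line) for line in sys.stdin)
for key, group in itertools.groupby(clusters, key=lambda c: c["k"]):
    docs = [doc for c in group for doc in c["v"]]
    print(json.dumps({"k": key, "v": docs}, ensure_ascii=False))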

Notes on Refs

  • the technique from fuzzycat has been ported in parts to skate, to go from the refs and release datasets to a number of clusters relating references to releases
  • we need to verify, but not the references against each other; only the refs against the release

Notes on Performance

While running bulk (1B+) clustering and verification, even with parallel, fuzzycat got slow. The citation graph project therefore contains a reimplementation of fuzzycat.verify and related functions in Go, which in this case is an order of magnitude faster. See: skate.

