
deduplify


A Python tool to search for and remove duplicated files in messy datasets.

Table of Contents:

  • Overview
  • Installation
  • Usage
  • Contributing

Overview

deduplify is a Python command line tool that will search a directory tree for duplicated files and optionally remove them. It recursively generates an MD5 hash for each file under a target directory and identifies the filepaths that produce unique and duplicated hashes. When deleting duplicated files, it removes the copies deepest in the directory tree, keeping only the copy at the shortest filepath.
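
As an illustration of the hashing step described above, a minimal sketch of walking a directory tree and collecting MD5 hashes might look like the following (a hand-rolled example for clarity, not deduplify's own implementation):

import hashlib
import os

def hash_file(filepath, chunk_size=8192):
    """Return the MD5 hash of a single file, read in chunks."""
    md5 = hashlib.md5()
    with open(filepath, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

def hash_tree(target_dir):
    """Walk target_dir and map each MD5 hash to the filepaths that produce it."""
    hashes = {}
    for dirpath, _, filenames in os.walk(target_dir):
        for name in filenames:
            filepath = os.path.join(dirpath, name)
            hashes.setdefault(hash_file(filepath), []).append(filepath)
    return hashes

Any hash that maps to more than one filepath marks a duplicated file.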

Installation

deduplify has a minimum Python requirement of v3.7 and has been developed in v3.8.

From PyPI

First, make sure your pip version is up-to-date.

python -m pip install --upgrade pip

Then install deduplify.

pip install deduplify

Manual Installation

Begin by cloning this repository and changing into it.

git clone https://github.com/Living-with-machines/deduplify.git
cd deduplify

Now run the setup script. This will install any requirements and put the deduplify CLI tool onto your Python $PATH.

python setup.py install
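
You can then check the tool is available; the top-level --help output below is assumed to follow standard argparse behaviour and should list the available commands.

deduplify --help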

Usage

deduplify has three commands: hash, compare, and clean.

Hashing files

The hash command takes the path to a target directory as an argument. It walks this directory tree, generates an MD5 hash for every file, and writes the results to a database stored as a JSON file, the name of which can be overridden using the --dbfile [-f] flag.

Each document in the generated database can be described as a dictionary with the following properties:

{
  "filepath": "",     # String. The full path to a given file.
  "hash": "",         # String. The MD5 hash of the given file.
  "duplicate": bool,  # Boolean. Whether this hash is repeated in the database (True) or not (False).
}
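
As an example of working with this database, the records can be inspected with a few lines of Python. This sketch assumes the JSON file holds a list of records with the fields above; adjust it if your output is structured differently.

import json

# Load the database written by `deduplify hash` (default name: file_hashes.json)
with open("file_hashes.json") as f:
    records = json.load(f)

# Assumed structure: a list of {"filepath", "hash", "duplicate"} dictionaries
duplicates = [r for r in records if r["duplicate"]]
print(f"{len(duplicates)} files share a hash with at least one other file")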

Command line usage:

usage: deduplify hash [-h] [-c COUNT] [-v] [-f DBFILE] [--restart] dir

positional arguments:
  dir                   Path to directory to begin search from

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of threads to parallelise over. Default: 1
  -v, --verbose         Print logging messages to the console
  -f DBFILE, --dbfile DBFILE
                        Destination database for file hashes. Must be a JSON file. Default: file_hashes.json
  --restart             Restart a run of hashing files and skip over files that have already been hashed. Output files containing duplicated and
                        unique filenames must already exist.
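
For example, to hash everything under target_dir across four threads, with verbose output and a custom database name:

deduplify hash target_dir --count 4 --verbose --dbfile my_hashes.json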

Comparing files

The compare command reads in the JSON database generated by running hash; the filename can be overridden using the --infile [-f] flag if the data were saved under a different name. The command checks whether the stems of the filepaths are equivalent for all paths that generated a given hash. If they are, the file is a true duplicate, since both its name and content match. If they do not match, the same content has been saved under two different filenames; in this scenario, a ValueError is raised and the user is asked to investigate these files manually.

If all the filenames for a given hash match, then the shortest filepath is removed from the list (and hence kept on disk) and the rest are returned to be deleted. To actually delete these files, the user needs to run compare with the --purge flag set.
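
The keep-the-shortest-path rule can be sketched as follows (an illustration of the behaviour described above, not deduplify's own code):

def mark_for_deletion(filepaths):
    """Given all filepaths that share one hash, keep the shortest path
    and return the remaining paths as deletion candidates."""
    ordered = sorted(filepaths, key=len)
    return ordered[1:]  # everything except the shortest filepath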

A recommended workflow to ensure that all duplicated files have been removed would be as follows:

deduplify hash target_dir  # First pass at hashing files
deduplify compare --purge  # Delete duplicated files
deduplify hash target_dir  # Second pass at hashing files
deduplify compare          # Compare the filenames again. The code should return nothing to compare

Command line usage:

usage: deduplify compare [-h] [-c COUNT] [-v] [-f INFILE] [--purge]

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of threads to parallelise over. Default: 1
  -v, --verbose         Print logging messages to the console
  -f INFILE, --infile INFILE
                        Database to analyse. Must be a JSON file. Default: file_hashes.json
  --purge               Deletes duplicated files. Default: False
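
For example, to compare against a database saved under a custom name and delete the duplicates it finds:

deduplify compare --infile my_hashes.json --purge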

Cleaning up

After purging duplicated files, the target directory may be left with empty sub-directories. Running the clean command will locate these empty sub-directories and remove them.
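
The bottom-up removal of empty directories can be sketched like this (an illustration of the idea, not deduplify's implementation):

import os

def remove_empty_dirs(target_dir):
    """Walk the tree bottom-up so children are checked before their parents."""
    for dirpath, _, _ in os.walk(target_dir, topdown=False):
        # Keep the root target_dir itself; remove any other directory that is empty
        if dirpath != target_dir and not os.listdir(dirpath):
            os.rmdir(dirpath)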

Command line usage:

usage: deduplify clean [-h] [-c COUNT] [-v] dir

positional arguments:
  dir                   Path to directory to begin search from

optional arguments:
  -h, --help            show this help message and exit
  -c COUNT, --count COUNT
                        Number of threads to parallelise over. Default: 1
  -v, --verbose         Print logging messages to the console

Global arguments

The following flags can be passed to any of the commands of deduplify.

  • --verbose [-v]: This flag prints logging output to the console instead of saving it to the deduplify.log file.
  • --count [-c]: Some processes within deduplify can be parallelised over multiple threads when working with larger datasets. To do this, include the --count flag with the (integer) number of threads you'd like to parallelise over, as in the example below. This flag will raise an error if more threads are requested than there are CPUs available on the host machine.
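
For example, to run the clean command over two threads with verbose console output:

deduplify clean target_dir --count 2 --verbose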

Contributing

Thank you for wanting to contribute to deduplify! :tada: :sparkling_heart: To get you started, please read our Code of Conduct and Contributing Guidelines.

Download files


Source Distribution

deduplify-0.3.0.tar.gz (12.1 kB)


Built Distribution

deduplify-0.3.0-py3-none-any.whl (11.5 kB)


File details

Details for the file deduplify-0.3.0.tar.gz.

File metadata

  • Download URL: deduplify-0.3.0.tar.gz
  • Upload date:
  • Size: 12.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.63.0 importlib-metadata/4.11.2 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for deduplify-0.3.0.tar.gz:

  • SHA256: ce45d0b552216712cdfb0a2c1779fba3de52a4aff0a74d4b9ca17b2e683daee4
  • MD5: b9121f565a752c14b8b687e36fb0f3b4
  • BLAKE2b-256: d302b6d80503885a99b57c1b714a4db54198df3c1e05beebdc8ae4fa873fadfe



File details

Details for the file deduplify-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: deduplify-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 11.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.63.0 importlib-metadata/4.11.2 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for deduplify-0.3.0-py3-none-any.whl:

  • SHA256: c2162a92e72f824e647127d344f7278fa5e6eefec245264aadd9076b57bd9a52
  • MD5: 08b4793bd7464f27d1cc9b27adb2e21b
  • BLAKE2b-256: bbecadcb7c1b21296e1057e555abae047e41d0c3ba8b2fbcbf53ccb9b164d665


