Input Set Generator
This is the input set generator for the R2C platform.
Installation
To install, simply `pip install r2c-inputset-generator`. Then run `r2c-isg` to load the shell.
Note: This application caches HTTP requests to the various package registries in the terminal's current directory. Be sure to navigate to an appropriate directory before loading the shell, or use the command `set-api --nocache` inside the shell.
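The cache behaves like a simple time-bounded store: a cached response is reused until it is older than the configured timeout, then refetched. Below is a minimal pure-Python sketch of that staleness rule; the class and its names are illustrative, and the real tool caches at the HTTP-request layer rather than in a dict.

```python
import time

class TimedCache:
    """Illustrative time-bounded cache keyed by URL (not the tool's internals)."""

    def __init__(self, timeout_days=7):
        self.timeout = timeout_days * 86400  # timeout in seconds
        self.store = {}  # url -> (timestamp, response)

    def get(self, url):
        entry = self.store.get(url)
        if entry is None:
            return None
        ts, response = entry
        if time.time() - ts > self.timeout:
            # Entry is stale: evict it and report a miss.
            del self.store[url]
            return None
        return response

    def put(self, url, response):
        self.store[url] = (time.time(), response)

cache = TimedCache(timeout_days=7)
cache.put('https://pypi.org/simple/requests/', '<html>...</html>')
print(cache.get('https://pypi.org/simple/requests/'))  # fresh entry: returned
```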
Quick Start
Try the following command sequences:
- Load the top 5,000 PyPI projects by downloads in the last 365 days, sort by descending number of downloads, trim to the top 100 most downloaded, download project metadata and all versions, and generate an input set JSON:

      load pypi top5kyear
      sort "desc download_count"
      trim 100
      get -mv all
      set-meta -n test -v 1.0
      export inputset.json
- Load all NPM projects, sample 100, download the latest versions, and generate an input set JSON:

      load npm allbydependents
      sample 100
      get -v latest
      set-meta -n test -v 1.0
      export inputset.json
- Load a CSV containing GitHub URLs and commit hashes, get project metadata and the latest versions, generate an input set JSON of type GitRepoCommit, remove all versions, and generate an input set JSON of type GitRepo:

      load --columns "url v.commit" github list_of_github_urls_and_commits.csv
      get -mv latest
      set-meta -n test -v 1.0
      export inputset_1.json
      trim -v 0
      export inputset_2.json
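For orientation, an exported input set is a JSON file that names the set and lists its inputs. The sketch below shows one plausible shape for a GitRepoCommit-style entry; the field names are assumptions for illustration, and the authoritative schema is defined by the R2C platform.

```python
import json

# Hypothetical sketch of an input set's shape; field names are assumed,
# and the real schema is defined by the R2C platform.
input_set = {
    'name': 'test',
    'version': '1.0',
    'inputs': [
        {
            'input_type': 'GitRepoCommit',
            'repo_url': 'https://github.com/example/project',
            'commit_hash': 'deadbeef',
        },
    ],
}

print(json.dumps(input_set, indent=2))
```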
Shell Usage
Input/Output
- load (OPTIONS) [noreg | github | npm | pypi] [WEBLIST_NAME | FILEPATH.csv]
  Generates a dataset from a weblist or a local file. The following weblists are available:
  - GitHub: top1kstarred, top1kforked; the top 1,000 most starred or forked repos
  - NPM: allbydependents; all packages, sorted from most to fewest dependents (caution: 1M+ projects; handle with care)
  - PyPI: top5kmonth and top5kyear; the top 5,000 most downloaded projects in the last 30/365 days

  Options:
  -c --columns "string of col names": A space-separated list of column names in a CSV. Overrides the default columns (name and version), as well as any headers listed in the file (headers in files begin with a '!'). The CSV reader recognizes the following column keywords: name, url, org, v.commit, v.version. All other columns are read in as project or version attributes.
  Example usage: --columns "name url downloads v.commit v.date".
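As a concrete example of a file the load command could consume with `--columns "url v.commit"`, the snippet below writes a two-column CSV of made-up GitHub URLs and commit hashes using Python's csv module; the filename and rows are hypothetical, and the column names are supplied on the command line rather than in the file.

```python
import csv

# Made-up data rows matching --columns "url v.commit": one URL column
# and one commit-hash column, no header row inside the file.
rows = [
    ['https://github.com/example/alpha', '1a2b3c4'],
    ['https://github.com/example/beta', '5d6e7f8'],
]

with open('list_of_github_urls_and_commits.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)
```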
- backup (FILEPATH.p)
  Backs up the dataset to a pickle file (defaults to ./dataset_name.p).
- restore FILEPATH.p
  Restores a dataset from a pickle file.
- import [noreg | github | npm | pypi] FILEPATH.json
  Builds a dataset from an R2C input set.
- export (FILEPATH.json)
  Exports a dataset to an R2C input set (defaults to ./dataset_name.json).
Data Acquisition
- get (OPTIONS)
  Downloads project and version metadata from GitHub/NPM/PyPI.

  Options:
  -m --metadata: Gets metadata for all projects.
  -v --versions [all | latest]: Gets all historical versions, or just the latest, for each project.
Transformation
- trim (OPTIONS) N
  Trims the dataset to N projects, or to N versions per project.

  Options:
  -v --versions: Binary flag; trims versions instead of projects.
- sample (OPTIONS) N
  Samples N projects, or N versions per project.

  Options:
  -v --versions: Binary flag; samples versions instead of projects.
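The difference between the two commands mirrors slicing versus random sampling on a plain list: trim keeps the first N in the current sort order, while sample draws N at random. A sketch of that distinction on made-up data (not the tool's internals):

```python
import random

projects = ['p1', 'p2', 'p3', 'p4', 'p5']

# trim 3: deterministic; keeps the first three in the current order
trimmed = projects[:3]

# sample 3: random, order-independent draw of three projects
random.seed(0)  # seeded here only so the draw is repeatable
sampled = random.sample(projects, 3)

print(trimmed)
print(sampled)
```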
- sort "[asc, desc] attributes [...]"
  Sorts the projects and versions based on a space-separated string of keywords. Valid keywords are:
  - Any project attribute
  - Any version attribute (prepend "v." to the attribute name)
  - Any uuid (prepend "uuids." to the uuid name)
  - Any meta value (prepend "meta." to the meta name)
  - The words "asc" and "desc"

  All values are sorted in ascending order by default. The first keyword in the string is the primary sort key, the second the secondary, and so on.

  Example: The string "uuids.name meta.url downloads desc v.version_str v.date" would sort the dataset by ascending project name, url, and download count, then by descending version string and date (assuming those keys exist).
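Mixed ascending/descending multi-key sorts like this can be reproduced with repeated stable sorts, applied from the least-significant key to the most-significant one. A sketch on made-up project records (the attribute names are illustrative):

```python
# Python's list.sort is stable, so sorting by the secondary key first and
# the primary key last yields a correct multi-key ordering.
rows = [
    {'name': 'b', 'downloads': 10},
    {'name': 'a', 'downloads': 30},
    {'name': 'a', 'downloads': 20},
]

# Secondary key first: descending download count.
rows.sort(key=lambda r: r['downloads'], reverse=True)
# Primary key last: ascending name; ties keep their download order.
rows.sort(key=lambda r: r['name'])

print(rows)  # a/30, a/20, b/10
```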
Settings
- set-meta (OPTIONS)
  Sets the dataset's metadata.

  Options:
  -n --name NAME: Input set name. Must be set before the dataset can be exported.
  -v --version VERSION: Input set version. Must be set before the dataset can be exported.
  -d --description DESCRIPTION: Description string.
  -r --readme README: Markdown-formatted readme string.
  -a --author AUTHOR: Author name; defaults to git user.name.
  -e --email EMAIL: Author email; defaults to git user.email.
- set-api (OPTIONS)
  Options:
  --cache_dir CACHE_DIR: The path to the requests cache; defaults to ./.requests_cache.
  --cache_timeout DAYS: The number of days before a cached request goes stale.
  --nocache: Binary flag; disables request caching for this dataset.
  --github_pat GITHUB_PAT: A GitHub personal access token, used to increase the maximum allowed hourly request rate from 60/hr to 5,000/hr. For instructions on obtaining a token, see: https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line.
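For reference, a GitHub personal access token is typically sent as an Authorization header on API requests. The snippet below builds (but does not send) such a request with Python's urllib; the token value is a placeholder, and how r2c-isg attaches the token internally is not shown in this README.

```python
from urllib.request import Request

# Placeholder token; a real one comes from your GitHub account settings.
token = 'ghp_your_token_here'

# Constructing the Request does not hit the network; it just demonstrates
# the header a PAT-authenticated GitHub API call would carry.
req = Request(
    'https://api.github.com/repos/example/project',
    headers={'Authorization': 'token %s' % token},
)

print(req.get_header('Authorization'))
```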
Visualization
- show
Converts the dataset to a json file and loads it in the system's native json viewer.
Python Project
You can also import the package into your own project. Just import the Dataset structure, initialize it, and you're good to go!

    from r2c_isg.structures import Dataset

    ds = Dataset.import_inputset(
        'file.csv',                    # or a weblist name (e.g., 'top5kyear')
        registry='github',             # or 'npm' or 'pypi'
        cache_dir='path/to/cache/dir', # optional; overrides ./.requests_cache
        cache_timeout=7,               # optional; days before cached requests go stale (default: 1 week)
        nocache=True,                  # optional; disables caching
        github_pat='your_github_pat'   # optional; personal access token for the github api
    )

    ds.get_projects_meta()

    ds.get_project_versions(historical='all')  # or historical='latest'

    ds.trim(
        100,
        on_versions=True  # optional; defaults to False
    )

    ds.sample(
        100,
        on_versions=True  # optional; defaults to False
    )

    ds.sort('string of sort parameters')

    ds.update(**{'name': 'your_dataset_name', 'version': 'your_dataset_version'})

    ds.export_inputset('your_inputset.json')
Troubleshooting
If you run into any issues, you can run the shell with the --debug flag enabled to get a full error message, then reach out to support@ret2.co with the stack trace and the steps to reproduce the error.

Note: If the issue is related to the "sample" command, be sure to seed the random number generator to ensure reproducibility.
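Assuming the sampler draws from Python's global random module (an assumption, not confirmed by this README), seeding before sampling is the standard way to make a draw reproducible:

```python
import random

random.seed(42)  # fix the global RNG state before sampling
first = random.sample(range(100), 5)

random.seed(42)  # re-seeding with the same value reproduces the same draw
second = random.sample(range(100), 5)

print(first == second)  # True
```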