A tool for learning vector representations of words and entities from Wikipedia
Wikipedia2Vec
Wikipedia2Vec is a tool for obtaining embeddings (i.e., vector representations) of words and entities (concepts that have corresponding pages in Wikipedia) from Wikipedia. It is developed and maintained by Studio Ousia.
This tool enables you to learn embeddings of words and entities simultaneously, and places similar words and entities close to one another in a continuous vector space. Embeddings can be easily trained by a single command with a publicly available Wikipedia dump as input.
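"Close to one another" here typically means high cosine similarity between vectors. The following is a toy, self-contained sketch with made-up 3-dimensional vectors (real Wikipedia2Vec embeddings are learned, not hand-set, and usually have 100 or more dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical vectors: the word "tokyo" and the entity "Tokyo" should
# end up near each other, while an unrelated word should not.
vectors = {
    "tokyo": [0.9, 0.1, 0.2],          # word
    "ENTITY/Tokyo": [0.85, 0.15, 0.25],  # entity
    "banana": [0.1, 0.9, 0.4],          # unrelated word
}

sim_related = cosine(vectors["tokyo"], vectors["ENTITY/Tokyo"])
sim_unrelated = cosine(vectors["tokyo"], vectors["banana"])
print(sim_related > sim_unrelated)  # the related pair scores higher
```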
This tool implements the conventional skip-gram model to learn the embeddings of words, and its extension proposed in Yamada et al. (2016) to learn the embeddings of entities. It has been used in state-of-the-art NLP models for tasks such as entity linking, named entity recognition, knowledge graph completion, entity relatedness, and question answering.
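The core of the skip-gram model is predicting the words that appear within a fixed-size window around each target word. A small, self-contained sketch of how such (target, context) training pairs are generated (window size and tokens are arbitrary; the real implementation is far more optimized):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs for skip-gram training.

    For each position i, every token within `window` positions of i
    (excluding i itself) is paired with tokens[i] as its context.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"], window=1))
```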
This tool has been tested on Linux, Windows, and macOS.
An empirical comparison between Wikipedia2Vec and existing embedding tools (i.e., FastText, Gensim, RDF2Vec, and Wiki2vec) is available here.
Documentation and pretrained embeddings for 12 languages (English, Arabic, Chinese, Dutch, French, German, Italian, Japanese, Polish, Portuguese, Russian, and Spanish) are available online at http://wikipedia2vec.github.io/.
Basic Usage
Wikipedia2Vec can be installed via PyPI:
% pip install wikipedia2vec
With this tool, embeddings can be learned by running a train command with a Wikipedia dump as input. For example, the following commands download the latest English Wikipedia dump and learn embeddings from this dump:
% wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
% wikipedia2vec train enwiki-latest-pages-articles.xml.bz2 MODEL_FILE
Then, the learned embeddings are written to MODEL_FILE. Note that this command can take many optional parameters. Please refer to our documentation for further details.
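Once training finishes, the model can be queried from Python. A minimal sketch, assuming the query API described in the project documentation (`Wikipedia2Vec.load`, `get_word_vector`, `get_entity_vector`, `get_entity`, `most_similar`); the model path and the entity/word names used here are illustrative:

```python
def query_model(model_path):
    """Load a trained Wikipedia2Vec model and run a few example queries.

    Requires `pip install wikipedia2vec` and a MODEL_FILE produced by
    the train command above; method names follow the project docs.
    """
    from wikipedia2vec import Wikipedia2Vec

    wiki2vec = Wikipedia2Vec.load(model_path)

    # Word and entity vectors live in the same continuous space.
    print(wiki2vec.get_word_vector("tokyo"))
    print(wiki2vec.get_entity_vector("Tokyo"))

    # Five nearest neighbours of the entity "Tokyo" (words or entities).
    for item, score in wiki2vec.most_similar(wiki2vec.get_entity("Tokyo"), 5):
        print(item, score)

# Usage (after training):
# query_model("MODEL_FILE")
```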
Reference
If you use Wikipedia2Vec in a scientific publication, please cite the following paper:
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Wikipedia2Vec: An Optimized Tool for Learning Embeddings of Words and Entities from Wikipedia.
@article{yamada2018wikipedia2vec,
title={Wikipedia2Vec: An Optimized Tool for Learning Embeddings of Words and Entities from Wikipedia},
author={Yamada, Ikuya and Asai, Akari and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu},
journal={arXiv preprint arXiv:1812.06280},
year={2018}
}