
serpextract provides easy extraction of keywords from search engine results pages (SERPs).

This module is made possible in large part by the hard work of the Matomo team. Specifically, we make extensive use of their list of search engines.

Installation

Latest release on PyPI:

$ pip install serpextract

Usage

Command Line

Command-line usage prints the engine name and keyword, each enclosed in quotes and separated by a comma:

$ serpextract "http://www.google.ca/url?sa=t&rct=j&q=ars%20technica"
"Google","ars technica"

You can also print out a list of all the SearchEngineParsers currently available in your local cache via:

$ serpextract -l
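
For example, to check whether a particular engine is covered, you can filter that list (assuming a Unix-like shell with grep):

$ serpextract -l | grep -i google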

Python

from serpextract import get_parser, extract, is_serp, get_all_query_params

non_serp_url = 'http://arstechnica.com/'
serp_url = ('http://www.google.ca/url?sa=t&rct=j&q=ars%20technica&source=web&cd=1&ved=0CCsQFjAA'
            '&url=http%3A%2F%2Farstechnica.com%2F&ei=pf7RUYvhO4LdyAHf9oGAAw&usg=AFQjCNHA7qjcMXh'
            'j-UX9EqSy26wZNlL9LQ&bvm=bv.48572450,d.aWc')

get_all_query_params()
# ['key', 'text', 'search_for', 'searchTerm', 'qrs', 'keyword', ...]

is_serp(serp_url)
# True
is_serp(non_serp_url)
# False

get_parser(serp_url)
# SearchEngineParser(engine_name='Google', keyword_extractor=['q'], link_macro='search?q={k}', charsets=['utf-8'])
get_parser(non_serp_url)
# None

extract(serp_url)
# ExtractResult(engine_name='Google', keyword=u'ars technica', parser=SearchEngineParser(...))
extract(non_serp_url)
# None
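
Putting these calls together, here is a minimal sketch of classifying a batch of referrer URLs. It uses only the functions and result attributes shown above; the URL list itself is hypothetical:

from serpextract import extract

# Hypothetical referrers; only the first is in a recognized SERP format.
referrers = [
    'http://www.google.ca/url?sa=t&rct=j&q=ars%20technica',
    'http://arstechnica.com/',
]

for url in referrers:
    result = extract(url)
    if result is not None:
        print('%s: %s' % (result.engine_name, result.keyword))
    else:
        print('not a SERP: %s' % url)

# Google: ars technica
# not a SERP: http://arstechnica.com/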

Naive Detection

The list of search engine parsers that Matomo, and therefore serpextract, uses is far from exhaustive. If you want serpextract to attempt to guess whether a given referring URL is a SERP, you can pass use_naive_method=True to serpextract.is_serp or serpextract.extract. By default, the naive method is disabled.

Naive search engine detection tries to find an instance of r'\.?search\.' in the netloc of a URL. If found, serpextract will then try to find a keyword in the query portion of the URL by looking for the following params in order:

_naive_params = ('q', 'query', 'k', 'keyword', 'term',)

If one of these is found, a keyword is extracted and an ExtractResult is constructed as:

ExtractResult(domain, keyword, None)  # No parser, but engine name and keyword

# Not a search engine recognized by serpextract
serp_url = 'http://search.piccshare.com/search.php?cat=web&channel=main&hl=en&q=test'

is_serp(serp_url)
# False

extract(serp_url)
# None

is_serp(serp_url, use_naive_method=True)
# True

extract(serp_url, use_naive_method=True)
# ExtractResult(engine_name=u'piccshare', keyword=u'test', parser=None)
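
For reference, the naive check described above boils down to roughly the following logic. This is an illustrative sketch of the keyword lookup, not serpextract's actual implementation:

import re
from urllib.parse import parse_qs, urlparse

_naive_re = re.compile(r'\.?search\.')
_naive_params = ('q', 'query', 'k', 'keyword', 'term',)

def naive_keyword(url):
    """Return a keyword if the URL naively looks like a SERP, else None."""
    parts = urlparse(url)
    # The netloc must look like a search domain, e.g. 'search.example.com'.
    if not _naive_re.search(parts.netloc):
        return None
    query = parse_qs(parts.query)
    # Try the candidate keyword parameters in priority order.
    for param in _naive_params:
        if param in query:
            return query[param][0]
    return None

naive_keyword('http://search.piccshare.com/search.php?cat=web&channel=main&hl=en&q=test')
# 'test'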

Custom Parsers

In the event that you have a custom search engine that you’d like to track which is not currently supported by Matomo/serpextract, you can create your own instance of serpextract.SearchEngineParser and either pass it explicitly to serpextract.is_serp or serpextract.extract, or add it to the internal list of parsers.

# Create a parser for PiccShare
from serpextract import SearchEngineParser, is_serp, extract

my_parser = SearchEngineParser(u'PiccShare',          # Engine name
                               u'q',                  # Keyword extractor
                               u'/search.php?q={k}',  # Link macro
                               u'utf-8')              # Charset
serp_url = 'http://search.piccshare.com/search.php?cat=web&channel=main&hl=en&q=test'

is_serp(serp_url)
# False

extract(serp_url)
# None

is_serp(serp_url, parser=my_parser)
# True

extract(serp_url, parser=my_parser)
# ExtractResult(engine_name=u'PiccShare', keyword=u'test', parser=SearchEngineParser(engine_name=u'PiccShare', keyword_extractor=[u'q'], link_macro=u'/search.php?q={k}', charsets=[u'utf-8']))

You can also permanently add a custom parser to the internal list of parsers that serpextract maintains so that you no longer have to explicitly pass a parser object to serpextract.is_serp or serpextract.extract.

from serpextract import SearchEngineParser, add_custom_parser, is_serp, extract

my_parser = SearchEngineParser(u'PiccShare',          # Engine name
                               u'q',                  # Keyword extractor
                               u'/search.php?q={k}',  # Link macro
                               u'utf-8')              # Charset
add_custom_parser(u'search.piccshare.com', my_parser)

serp_url = 'http://search.piccshare.com/search.php?cat=web&channel=main&hl=en&q=test'
is_serp(serp_url)
# True

extract(serp_url)
# ExtractResult(engine_name=u'PiccShare', keyword=u'test', parser=SearchEngineParser(engine_name=u'PiccShare', keyword_extractor=[u'q'], link_macro=u'/search.php?q={k}', charsets=[u'utf-8']))

Tests

There are some basic tests for popular search engines, but more are required:

$ pip install -r requirements.txt
$ py.test
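
As a starting point, new tests can be written directly against the public API. A minimal sketch (the URL and expected values come from the Google example above; the file name is illustrative):

# test_google.py
from serpextract import extract, is_serp

SERP_URL = 'http://www.google.ca/url?sa=t&rct=j&q=ars%20technica'

def test_google_serp():
    assert is_serp(SERP_URL)
    result = extract(SERP_URL)
    assert result.engine_name == 'Google'
    assert result.keyword == 'ars technica'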

Caching

Internally, this module caches an OrderedDict representation of Matomo’s list of search engines, which is stored in serpextract/search_engines.pickle. This list isn’t expected to change often, so the module ships with a pre-built cached version.
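
If you want to inspect the cached list yourself, it can be loaded with the standard library. This sketch assumes the pickle deserializes to an OrderedDict keyed by search engine domain; adjust if the layout differs:

import os
import pickle

import serpextract

# Locate the pickle shipped inside the installed package.
cache_path = os.path.join(os.path.dirname(serpextract.__file__),
                          'search_engines.pickle')
with open(cache_path, 'rb') as f:
    engines = pickle.load(f)

print(len(engines))       # number of known search engine entries
print(list(engines)[:5])  # first few keys (assumed to be domains)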
