
Twitter stream & search API grabber

Project description

Gazouilloire

A command line tool for long-term tweet collection. Gazouilloire combines two methods to collect tweets from the Twitter API ("search" and "filter") in order to maximize the number of collected tweets, and automatically fills the gaps in the collection in case of connection errors or reboots. It handles various config options such as:

  • collecting only during specific time periods
  • limiting the collection to some locations
  • resolving redirected urls
  • downloading only certain types of media content (for example, photos but not videos)
  • unfolding Twitter conversations

Python >= 3.7 compatible.

HowTo

  • Install gazouilloire

    pip install gazouilloire
    
  • Install Elasticsearch, version 7.X (you can also use Docker for this)
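
    For instance, with Docker, a single-node Elasticsearch 7.X instance can be started like this (the exact image tag is just an example; any 7.X version works):

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.0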

  • Initialize the gazouilloire collection in a specific directory...

    gazouilloire init path/to/collection/directory
    
  • ...or in the current directory

    gazouilloire init
    

A config.json file is created. Open it to configure the collection parameters.

  • Set your Twitter API key and generate the related Access Token

    "twitter": {
       "key": "<Consumer Key (API Key)>xxxxxxxxxxxxxxxxxxxxx",
       "secret": "<Consumer Secret (API Secret)>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
       "oauth_token": "<Access Token>xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
       "oauth_secret": "<Access Token Secret>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }
    
  • Write the list of desired keywords and @users and/or the list of desired url_pieces as JSON arrays:

    "keywords": [
        "amour",
        "\"mots successifs\"",
        "@medialab_scpo"
    ],
    "url_pieces": [
        "medialab.sciencespo.fr/fr"
    ],
    

    Some advanced filters can be used in combination with the keywords, such as -undesiredkeyword, filter:links, -filter:media, -filter:retweets, etc. See Twitter API's documentation for more details.
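
    For instance, a keywords array combining plain keywords with such filters might look like this (the values are illustrative, not defaults):

    "keywords": [
        "medialab -filter:retweets",
        "datajournalism filter:links"
    ],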

    Avoid using accented characters: Twitter automatically returns tweets both with and without accents (for instance, searching "heros" will find tweets containing both "heros" and "héros").

    Note that there are three ways to filter further (a sketch combining the first two follows this list):

    • language: in order to collect only tweets written in a specific language, just add "language": "fr" to the config (the language should be written as an ISO 639-1 code)

    • geolocation: just add a "geolocation": "Paris, France" field to the config with the desired geographical boundaries, or give the coordinates of the desired bounding box (for instance [48.70908786918211, 2.1533203125, 49.00274483644453, 2.610626220703125])

    • time_limited_keywords: in order to filter on specific keywords during planned time periods, for instance:

    "time_limited_keywords": {
          "#fosdem": [
              ["2021-01-27 04:30", "2021-01-28 23:30"]
          ]
      },
    
  • Set up extra options (an example combining them follows this list):

    • resolve_redirected_links: set to true or false to enable or disable automatic resolution of all links found in tweets (t.co links are always handled, but this also resolves other shorteners such as bit.ly).

    • grab_conversations: set to true to activate automatic, iterative collection of all tweets to which collected tweets reply (warning: this often collects tweets from well outside the collection time period, which you should account for when processing the data).

    • catchup_past_week: Twitter's free API only allows collecting tweets up to 7 days in the past, which gazouilloire does by default; set this option to false to collect only tweets posted after the collection was started.

    • download_media: set "download_media": {"photo": true, "video": false, "animated_gif": false} to activate automatic downloading of photos posted by users, without videos or gifs (this does not include images from social cards). All fields can also be set to true to download everything. Additionally, set the media_directory field to the absolute path where Gazouilloire should store images and videos on the machine.

    • timezone: adjust the timezone in which tweet timestamps should be computed. Allowed values are suggested at Gazouilloire's startup if an invalid one is set.
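
    As an illustration, a config combining these extra options might look like this (the path and values are examples, not defaults):

    "resolve_redirected_links": true,
    "grab_conversations": true,
    "catchup_past_week": true,
    "download_media": {"photo": true, "video": false, "animated_gif": false},
    "media_directory": "/data/gazouilloire/media",
    "timezone": "Europe/Paris"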

Starting the collection:

Before starting the collection, make sure that you will have enough disk space: count about 1 GB per million collected tweets (excluding images and other media content).

You should also plan to restart your collection in a new folder (i.e. using another Elasticsearch index) if the current collection exceeds 150 million tweets.
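
To keep an eye on index size and document counts, you can also query Elasticsearch's _cat API directly (assuming Elasticsearch listens on localhost:9200):

    curl -s "http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size"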

To start the collection:

  • Run with:

    gazouilloire run path/to/collection/directory
    

    or, to run the script in the current directory:

    gazouilloire run
    
  • The tool can also run as a daemon with:

    gazouilloire start
    
  • Stop the daemon with:

    gazouilloire stop
    
  • Access the current collection status (running/not running, number of collected docs, disk usage, etc.) with:

    gazouilloire status
    
  • Gazouilloire stores its current search state in the collection directory. This means that if you restart Gazouilloire in the same directory, it will not search again for tweets that were already collected. If you want a fresh start, you can reset the search state, as well as everything that was saved on disk, with:

    gazouilloire reset
    
  • You can also choose to delete only some elements, e.g. the tweets stored in Elasticsearch and the media files:

    gazouilloire reset --only tweets,media
    

    Possible values for the --only argument: tweets,links,logs,piles,search_state,media

  • Data is stored in Elasticsearch, which you can query directly (see the Python sketch at the end of this section). You can also export it easily in CSV format:

    # Export all fields from all tweets:
    gazouilloire export
    # or
    gazou export
    
  • By default, the export command writes to stdout. You can also use the -o option to write to a file:

    gazou export > my_tweets_file.csv
    # is equivalent to
    gazou export -o my_tweets_file.csv
    
  • Other available options:

    # Export a csv of all tweets having a specific word in their text:
    gazou export medialab
    
    # Export a csv of all tweets between 2 dates (the last date is excluded):
    gazou export --since "2021-03-24T12:00" --until "2021-03-24T13:00"
    # or
    gazou export --since "2021-03-24" --until "2021-03-25"
    
    # Export a csv of all tweets having one of many specific words in their text:
    gazou export medialab digitalhumanities datajournalism '#python'
    
    # Export only a selection of columns (--columns and --select, or -c and -s, are equivalent):
    gazouilloire export --columns id,user_screen_name,local_time,links
    # or
    gazou export --select id,user_screen_name,local_time,links
    # Another example: export only the text of the tweets:
    gazou export -s text
    
    # Exclude tweets from conversations or from quotes (i.e. that do not match the keywords defined in config.json)
    gazou export --exclude-threads
    
    # Exclude retweets from the export
    gazou export --exclude-retweets
    
    # Export all tweets matching a specific Elasticsearch term query, for instance by user name:
    gazou export "{'user_screen_name': 'medialab_ScPo'}"
    
    # Take a csv file with an "id" column and return all tweets matching these ids:
    gazou export --export-tweets-from-file yourfile.csv
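
    Since the tweets live in a plain Elasticsearch index, they can also be queried from Python with the official elasticsearch client. A minimal sketch, assuming Elasticsearch runs on localhost:9200 and that the tweets index is named mycorpus_tweets (the index name is hypothetical; check yours with curl localhost:9200/_cat/indices):

    from elasticsearch import Elasticsearch  # pip install "elasticsearch>=7,<8"

    es = Elasticsearch("http://localhost:9200")
    # Match tweets containing a given word in their text field
    res = es.search(
        index="mycorpus_tweets",  # hypothetical index name: replace with your own
        body={"query": {"match": {"text": "medialab"}}, "size": 10},
    )
    for hit in res["hits"]["hits"]:
        print(hit["_source"]["local_time"], hit["_source"]["text"])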
    

Troubleshooting

  • Elasticsearch

    • Remember to set the heap size (1 GB by default) when moving to production: 1 GB is fine for indices under 15-20 million tweets, but be sure to set a higher value for heavier corpora.

      Set these values in /etc/elasticsearch/jvm.options (if you use Elasticsearch as a service) or in your_installation_folder/config/jvm.options (if you have a custom installation folder):

      -Xms2g
      -Xmx2g
      

      Here the heap size is set to 2 GB (set the values to -Xms5g and -Xmx5g if you need 5 GB, etc.).

    • If you encounter this Elasticsearch error message: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]:

      → Increase the vm.max_map_count value:

      sudo sysctl -w vm.max_map_count=262144
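
      To make the setting persist across reboots, it can also be added to /etc/sysctl.conf:

      vm.max_map_count=262144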
      


    • If you get a ClusterBlockException [SERVICE_UNAVAILABLE/1/state not recovered / initialized] when starting Elasticsearch:

      → Check the value of gateway.recover_after_nodes in /etc/elasticsearch/elasticsearch.yml:

      sudo [YOUR TEXT EDITOR] /etc/elasticsearch/elasticsearch.yml
      

      Edit the value of gateway.recover_after_nodes to match your number of nodes (usually 1; easily checked at http://host:port/_nodes).
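
      For a single-node setup, the line would typically read:

      gateway.recover_after_nodes: 1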

Publications using Gazouilloire

Publications talking about Gazouilloire

Credits & License

Benjamin Ooghe-Tabanou, Jules Farjas, Béatrice Mazoyer et al. @ Sciences Po médialab

Read more about Gazouilloire's migration from Python2 & Mongo to Python3 & ElasticSearch in Jules' report.

Discover more of our projects at médialab tools.

This work is supported by DIME-Web, part of DIME-SHS research equipment financed by the EQUIPEX program (ANR-10-EQPX-19-01).

Gazouilloire is free and open source software released under the GPL 3.0 license.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gazouilloire-1.0.0a12.tar.gz (54.3 kB)

Uploaded Source

Built Distributions

gazouilloire-1.0.0a12-py3.8.egg (115.5 kB)

Uploaded Source

gazouilloire-1.0.0a12-py3-none-any.whl (67.9 kB)

Uploaded Python 3

File details

Details for the file gazouilloire-1.0.0a12.tar.gz.

File metadata

  • Download URL: gazouilloire-1.0.0a12.tar.gz
  • Upload date:
  • Size: 54.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.10.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.8.2

File hashes

Hashes for gazouilloire-1.0.0a12.tar.gz
Algorithm Hash digest
SHA256 a85e24ac25c5f727e1dc0c4a97bdd1a3d8bb1e74d55036f46d7283148c62dc0a
MD5 adc4353be87dcf166d71baa167fc4556
BLAKE2b-256 b32a498197227cd87e1d9e923d70c643c5526bffac7eace41a915f23e781ad20

See more details on using hashes here.

File details

Details for the file gazouilloire-1.0.0a12-py3.8.egg.

File metadata

  • Download URL: gazouilloire-1.0.0a12-py3.8.egg
  • Upload date:
  • Size: 115.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.10.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.8.2

File hashes

Hashes for gazouilloire-1.0.0a12-py3.8.egg
Algorithm Hash digest
SHA256 fe46e336bef2f6ca1a8029e43e5cf733745b74b4f669943738f14f17a308ffef
MD5 42a411dd4779e1b4e866cadf8ee3d530
BLAKE2b-256 db7e5fc318aa434fed11bdc9d4ec94ad6781b50dd5c5b50d37f82c891656cff2

See more details on using hashes here.

File details

Details for the file gazouilloire-1.0.0a12-py3-none-any.whl.

File metadata

  • Download URL: gazouilloire-1.0.0a12-py3-none-any.whl
  • Upload date:
  • Size: 67.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.10.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.8.2

File hashes

Hashes for gazouilloire-1.0.0a12-py3-none-any.whl
Algorithm Hash digest
SHA256 e3d81c1ac86e6089889bacd8d7994827fc593efc182241e732d8fefc77d37c9a
MD5 7836c58b450e5d92a1d35a0677d6ba20
BLAKE2b-256 ee3a0ec8735a59e22b4794a77bbd5735e331414b6a3b3a8e9d43899fc9727ac0

See more details on using hashes here.
