
twittersearch's Introduction

TwitterSearch


This library allows you to easily create a search through the Twitter API without having to know too much about the API details. Based on such a search you can even iterate through all tweets reachable via the Twitter Search API; the next result pages are reloaded automatically during iteration. TwitterSearch was developed as part of an interdisciplinary project at the Technische Universität München.

Reasons to use TwitterSearch

Well, because it can be quite annoying to piece the search URL together by hand, and a minor spelling mistake is sometimes hard to find. Not to mention the pain of fetching the next page of results. Why not centralize this process and concentrate on the more important parts of your project?

More than that, TwitterSearch is:

  • pretty small (around 500 lines of code currently)
  • pretty easy to use, even for beginners
  • pretty good at giving you all available information (including meta information)
  • pretty iterable without any need to manually reload more results from the API
  • pretty strict: invalid API argument values raise an exception before the API is even queried, which helps you avoid hitting Twitter's limits with obviously wrong API calls (see the short sketch after this list)
  • pretty friendly to Python >= 2.7 and Python >= 3.2
  • pretty pretty to look at :)
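For example, here is a minimal sketch of that early validation (assuming, as stated in the list above, that an invalid argument such as an unknown language code is rejected locally; the exact error text may differ between versions):

from TwitterSearch import TwitterSearchOrder, TwitterSearchException

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['Guttenberg'])
    tso.set_language('xx')  # not a valid ISO 639-1 language code
except TwitterSearchException as e:
    # raised locally, before any request is sent to Twitter
    print(e)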

Installation

TwitterSearch is also available on PyPI and can therefore be installed via pip install TwitterSearch or easy_install TwitterSearch. If you'd like to work with bleeding-edge versions, you're free to clone the devel branch. A manual installation can be done by downloading or cloning the repository and running python setup.py install.

Search Twitter

Everybody knows how much work it is to study at a university, so why not take a small shortcut? In this example we assume we would like to find out how to copy a doctoral thesis in Germany. Let's have a look at what Twitter users have to say about Mr Guttenberg.

from TwitterSearch import *
try:
    tso = TwitterSearchOrder() # create a TwitterSearchOrder object
    tso.set_keywords(['Guttenberg', 'Doktorarbeit']) # let's define all words we would like to have a look for
    tso.set_language('de') # we want to see German tweets only
    tso.set_include_entities(False) # and don't give us all those entity information

    # it's about time to create a TwitterSearch object with our secret tokens
    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
    )

    # this is where the fun actually starts :)
    for tweet in ts.search_tweets_iterable(tso):
        print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

The result will be text looking similar to this. But as you can see, unfortunately there is no hint hidden in those tweets about how to get your doctoral thesis without any work. Damn it!

@enricozero tweeted: RT @viehdeo: Archiv: Comedy-Video: Oliver Welke parodiert “Mogelbaron” Dr. Guttenbergs Doktorarbeit (Schummel-cum-laude Pla... http://t. ...
@schlagworte tweeted: "Erst letztens habe ich in meiner Doktorarbeit Guttenberg zitiert." Blockflöte des Todes: http://t.co/pCzIn429
@nkoni7 tweeted: Familien sind auch betroffen wenn schlechte Politik gemacht wird. Nicht nur wenn Guttenberg seine Doktorarbeit fälscht ! #absolutemehrheit

Access User Timelines

You think that the global wisdom of Twitter is way too much for your needs? Well, let's query the timeline of a certain user then:

from TwitterSearch import *

try:
    tuo = TwitterUserOrder('NeinQuarterly') # create a TwitterUserOrder

    # it's about time to create TwitterSearch object again
    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
    )

    # start asking Twitter about the timeline
    for tweet in ts.search_tweets_iterable(tuo):
        print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )

except TwitterSearchException as e: # catch all those ugly errors
    print(e)

You may guess the resulting output, but here it is anyway:

@NeinQuarterly tweeted: To make a long story short: Twitter.
@NeinQuarterly tweeted: A German subordinating conjunction walks into a bar. Three hours later it's joined by a verb.
@NeinQuarterly tweeted: Foucault walks into a bar. No one notices.
@NeinQuarterly tweeted: If it's not deleted, probably wasn't worth writing.
@NeinQuarterly tweeted: Trust me: German prepositions aren't laughing with you. They're laughing at you.
@NeinQuarterly tweeted: Another beautiful day for cultural pessimism.
@NeinQuarterly tweeted: Excuse me, sir. Your Zeitgeist has arrived.

Interested in some more details?

If you'd like to get more information about how TwitterSearch works internally and how to use it to its full potential, have a look at the latest documentation. A changelog is also available within this repository.

Updating to 1.0.0 and newer

If you're upgrading from a version < 1.0.0, be aware that the API changed! As part of the process of obtaining PEP-8 compatibility, all methods had to be renamed. The code changes needed to support the PEP-8 naming scheme are trivial: just change the old method naming scheme of setKeywords(...) to the new one of set_keywords(...).

Apart from this issue, four other API changes were introduced with version 1.0.0:

  • simplified proxy functionality (plain strings instead of dicts, as only HTTPS proxies can be supported anyway)
  • simplified geo-code parameter (TwitterSearchOrder.set_geocode(...,metric=True) renamed to set_geocode(...,imperial_metric=True))
  • simplified TwitterSearch.get_statistics() from dict to tuple style ({'queries':<int>, 'tweets':<int>} to (<int>,<int>))
  • additional feature: timelines of users can now be accessed using the new class TwitterUserOrder

In total, those changes can be made quickly without browsing the documentation.
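As a rough sketch of what an upgraded snippet looks like (the old camelCase names in the comments are the pre-1.0.0 spellings used in some of the issue reports below; credentials are placeholders):

from TwitterSearch import *

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['Guttenberg', 'Doktorarbeit'])  # was: tso.setKeywords(...)

    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
    )

    for tweet in ts.search_tweets_iterable(tso):  # was: ts.searchTweetsIterable(tso)
        print('@%s tweeted: %s' % (tweet['user']['screen_name'], tweet['text']))

    # statistics are now a (queries, tweets) tuple instead of a dict
    queries, tweets_seen = ts.get_statistics()
    print('%i queries issued, %i tweets seen' % (queries, tweets_seen))

except TwitterSearchException as e:
    print(e)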

If you're unable to apply those changes, you might consider using TwitterSearch versions < 1.0.0. Those will stay available through PyPI and will therefore remain installable in the future using the common installation methods, e.g. pip install -I TwitterSearch==0.78.6. Using the release tags is another easy way to navigate through all versions of this library.

License (MIT)

Copyright (C) 2013 Christian Koepp

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

twittersearch's People

Contributors

bitdeli-chef, ckoepp, episod, hwangmoretime, igor-shevchenko, msardelich, ronggui, sajam


twittersearch's Issues

Collect tweets from 03-20 Aug, 2014 for a particular location

Hello,

I am a PhD student. I am new to Python and am using TwitterSearch to collect tweets from 03-20 Aug for a particular location (geocode). I am trying to run the following piece of code.

from TwitterSearch import *

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['protest', 'protested', 'protesting', 'riot', 'rioted',
                      'rioting', 'rally', 'rallied', 'rallying', 'marched',
                      'marching', 'strike', 'striked', 'striking'])
    tso.set_language('en') # we want to see English tweets only
    tso.set_geocode(37.00, -92.00, 100)
    tso.set_include_entities(False) # and don't give us all those entity information
    tso.set_since_id(367020906049835008)
    tso.set_max_id(501488707195924481)

    ts = TwitterSearch(
        consumer_key = '.............',                             # my access credentials
        consumer_secret = '.........................',
        access_token = '...................',
        access_token_secret = '.......................'
    )

    for tweet in ts.search_tweets_iterable(tso):
        user = tweet['user']['screen_name'].encode("ASCII", errors='ignore')
        text = tweet['text'].encode("ASCII", errors='ignore')
        time = tweet['created_at'].encode("ASCII", errors='ignore')
        print('@%s tweeted: %s on %s' % (user, text, time) + '\n')

except TwitterSearchException as e:
    print(e)

However, the above piece of code is not returning anything. Please help me in this regard. Can I collect old tweets for a particular location without any keyword?

Thanks in advance.

Regards,
Mohammed

Program randomly freezes

Hello... I'm using your library and I don't know why, but my program randomly freezes sometimes. My program is pretty simple and is pretty much just a copy of the code sample you provide at https://twittersearch.readthedocs.org/en/latest/index.html (actually, your code sample was also freezing when I tried it).

Could it have to do with the version of python I'm using? (2.7.9)
I installed TwitterSearch through pip. I hope it's not some deadlock issue.

Here's what I've been running:

from TwitterSearch import *
from time import sleep
try:
    tso = TwitterSearchOrder() # create a TwitterSearchOrder object
    tso.set_keywords(['#vr', '-RT']) # let's define all words we would like to have a look for
    tso.set_language('en') # hell no German, I want English!
    tso.set_include_entities(False) # and don't give us all those entity information

    # it's about time to create a TwitterSearch object with our secret tokens
    ts = TwitterSearch(
        consumer_key = 'xxxx',
        consumer_secret = 'xxxx',
        access_token = 'xxxx',
        access_token_secret = 'xxxx'
     )

    # open file for writing
    text_file = open("#vrtest.txt", "w")

    # check when to stop
    iterations = 0
    max_tweets = 100000

    # callback function used to check if we need to pause the program
    def my_callback_closure(current_ts_instance): # accepts ONE argument: an instance of TwitterSearch
        queries, tweets_seen = current_ts_instance.get_statistics()

        if queries > 0 and (queries % 2) == 0: # trigger delay every other query
            print("\nQueries: " + str(queries) + " now sleeping, 1 minute.\n");
            sleep(60) # sleep for 60 seconds

     # this is where the fun actually starts :)
    for tweet in ts.search_tweets_iterable(tso, callback=my_callback_closure):

        current_line = "%s" % ( tweet['text'] )

        iterations = iterations + 1
        print( "i: " + str(iterations) + " - " + tweet['user']['screen_name'] + " tweeted: " + current_line )

        text_file.write(current_line.encode('utf-8', 'ignore') + "\n")

        # wait 1 second every 10 tweets
        if (iterations % 10 == 0):
            print("\nSleeping 1 second.\n")
            sleep(1)

        if (iterations >= max_tweets):
            break

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

finally:
    # close file
    text_file.close()

More bugs in TwitterSearchOrder

Hi, me again. Maybe I really should fork and pull. If I find another bug I will :)

70: self.argument.update( { 'since_id' : '%s' % twid } ) should be
70: self.arguments.update( { 'since_id' : '%s' % twid } )

The same for line 76
76: self.argument.update( { 'max_id' : '%s' % twid } ) should be
76: self.arguments.update( { 'max_id' : '%s' % twid } )

Next time I will fork, promise.

how to set a proxy ip ?

Because of the national Great Firewall I cannot get a response, so how can I use a proxy IP?
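The changelog section above notes that 1.0.0 switched to plain-string HTTPS proxies. A minimal sketch of what that could look like; the proxy keyword argument name is an assumption here, so please check the documentation for the exact spelling:

from TwitterSearch import *

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['Guttenberg'])

    # 'proxy' as a plain HTTPS proxy string is an assumption based on the
    # simplified proxy handling mentioned in the 1.0.0 changelog
    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444',
        proxy = 'my.https-proxy.example:3128'
    )

    for tweet in ts.search_tweets_iterable(tso):
        print(tweet['text'])

except TwitterSearchException as e:
    print(e)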

Search for list of emoji

Is there any way to search for a list of emoji? I am trying to search for all the flag emoji, but I get the error Error 403: ('Forbidden: The request is understood, but', 'it has been refused or access is not allowed')

main.py

import flags
from TwitterSearch import *
import sys
import json

def is_flag_emoji(c):
    return "\U0001F1E6\U0001F1E8" <= c <= "\U0001F1FF\U0001F1FC" or c in ["\U0001F3F4\U000e0067\U000e0062\U000e0065\U000e006e\U000e0067\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0073\U000e0063\U000e0074\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0077\U000e006c\U000e0073\U000e007f"]



i = 0
data = {}

try:
    tso = TwitterSearchOrder() # create a TwitterSearchOrder object
    tso.set_keywords(flags.list) # let's define all words we would like to have a look for
    tso.set_language('en') # we want to see English tweets only
    tso.set_include_entities(False) # and don't give us all those entity information
    tso.set_count(20)

    # it's about time to create a TwitterSearch object with our secret tokens
    ts = TwitterSearch(
        consumer_key = '****',
        consumer_secret = '****',
        access_token = '****',
        access_token_secret = '****'
     )

     # this is where the fun actually starts :)
    for tweet in ts.search_tweets_iterable(tso):
        if i <= 20:
            # print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
            data[tweet['user']['screen_name']] = tweet['text']
            i += 1
        else:
            print(data)
            sys.exit(1)

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

flags.py

list = ["🇦🇫", "🇦🇽", "🇦🇱", "🇩🇿", "🇦🇸", "🇦🇩", "🇦🇴", "🇦🇮", "🇦🇶", "🇦🇬", "🇦🇷", "🇦🇲", "🇦🇼", "🇦🇨", "🇦🇺", "🇦🇹", "🇦🇿", "🇧🇸", "🇧🇭", "🇧🇩", "🇧🇧", "🇧🇾", "🇧🇪", "🇧🇿", "🇧🇯", "🇧🇲", "🇧🇹", "🇧🇴", "🇧🇦", "🇧🇼", "🇧🇻", "🇧🇷", "🇮🇴", "🇻🇬", "🇧🇳", "🇧🇬", "🇧🇫", "🇧🇮", "🇰🇭", "🇨🇲", "🇨🇦", "🇮🇨", "🇨🇻", "🇧🇶", "🇰🇾", "🇨🇫", "🇪🇦", "🇹🇩", "🇨🇱", "🇨🇳", "🇨🇽", "🇨🇵", "🇨🇨", "🇨🇴", "🇰🇲", "🇨🇬", "🇨🇩", "🇨🇰", "🇨🇷", "🇨🇮", "🇭🇷", "🇨🇺", "🇨🇼", "🇨🇾", "🇨🇿", "🇩🇰", "🇩🇬", "🇩🇯", "🇩🇲", "🇩🇴", "🇪🇨", "🇪🇬", "🇸🇻", "🇬🇶", "🇪🇷", "🇪🇪", "🇪🇹", "🇪🇺", "🇫🇰", "🇫🇴", "🇫🇯", "🇫🇮", "🇫🇷", "🇬🇫", "🇵🇫", "🇹🇫", "🇬🇦", "🇬🇲", "🇬🇪", "🇩🇪", "🇬🇭", "🇬🇮", "🇬🇷", "🇬🇱", "🇬🇩", "🇬🇵", "🇬🇺", "🇬🇹", "🇬🇬", "🇬🇳", "🇬🇼", "🇬🇾", "🇭🇹", "🇭🇲", "🇭🇳", "🇭🇰", "🇭🇺", "🇮🇸", "🇮🇳", "🇮🇩", "🇮🇷", "🇮🇶", "🇮🇪", "🇮🇲", "🇮🇱", "🇮🇹", "🇯🇲", "🇯🇵", "🇯🇪", "🇯🇴", "🇰🇿", "🇰🇪", "🇰🇮", "🇽🇰", "🇰🇼", "🇰🇬", "🇱🇦", "🇱🇻", "🇱🇧", "🇱🇸", "🇱🇷", "🇱🇾", "🇱🇮", "🇱🇹", "🇱🇺", "🇲🇴", "🇲🇰", "🇲🇬", "🇲🇼", "🇲🇾", "🇲🇻", "🇲🇱", "🇲🇹", "🇲🇭", "🇲🇶", "🇲🇷", "🇲🇺", "🇾🇹", "🇲🇽", "🇫🇲", "🇲🇩", "🇲🇨", "🇲🇳", "🇲🇪", "🇲🇸", "🇲🇦", "🇲🇿", "🇲🇲", "🇳🇦", "🇳🇷", "🇳🇵", "🇳🇱", "🇳🇨", "🇳🇿", "🇳🇮", "🇳🇪", "🇳🇬", "🇳🇺", "🇳🇫", "🇲🇵", "🇰🇵", "🇳🇴", "🇴🇲", "🇵🇰", "🇵🇼", "🇵🇸", "🇵🇦", "🇵🇬", "🇵🇾", "🇵🇪", "🇵🇭", "🇵🇳", "🇵🇱", "🇵🇹", "🇵🇷", "🇶🇦", "🇷🇪", "🇷🇴", "🇷🇺", "🇷🇼", "🇼🇸", "🇸🇲", "🇸🇹", "🇸🇦", "🇸🇳", "🇷🇸", "🇸🇨", "🇸🇱", "🇸🇬", "🇸🇽", "🇸🇰", "🇸🇮", "🇸🇧", "🇸🇴", "🇿🇦", "🇬🇸", "🇰🇷", "🇸🇸", "🇪🇸", "🇱🇰", "🇧🇱", "🇸🇭", "🇰🇳", "🇱🇨", "🇲🇫", "🇵🇲", "🇻🇨", "🇸🇩", "🇸🇷", "🇸🇯", "🇸🇿", "🇸🇪", "🇨🇭", "🇸🇾", "🇹🇼", "🇹🇯", "🇹🇿", "🇹🇭", "🇹🇱", "🇹🇬", "🇹🇰", "🇹🇴", "🇹🇹", "🇹🇦", "🇹🇳", "🇹🇷", "🇹🇲", "🇹🇨", "🇹🇻", "🇺🇬", "🇺🇦", "🇦🇪", "🇬🇧", "🏴󠁧󠁢󠁥󠁮󠁧󠁿", "🏴󠁧󠁢󠁳󠁣󠁴󠁿", "🏴󠁧󠁢󠁷󠁬󠁳󠁿", "🇺🇸", "🇺🇾", "🇺🇲", "🇻🇮", "🇺🇿", "🇻🇺", "🇻🇦", "🇻🇪", "🇻🇳", "🇼🇫", "🇪🇭", "🇾🇪", "🇿🇲", "🇿🇼"]

PEP-8

TwitterSearch is great, but do you plan to provide a more PEP-8-ish API?

For instance, here's an example from the README:

from twitter_search import *

tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(['Guttenberg', 'Doktorarbeit'])
tso.set_language('de')
tso.set_count(7)
tso.set_include_entities(False)

try:
    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
    )

    for tweet in ts.iter_search(tso):
        pass  # ...
except TwitterSearchException as e:
    print(e)

My first time using GIT, so if I need to post this info in a different way let me know.

The def next(self) method was not working correctly: it would return all of the items in the list except the last one.

So if I had tweeted the following

whitecowtest does this make sense

whitecowtest so this is love?

whitecowtest HELLO Mother

I would only get the following returned.

whitecowtest so this is love?

whitecowtest HELLO Mother

Below is the correct code for def next(self):

def next(self):
    if self.nextTweet < len(self.response['content']['statuses']):
        strresponse = self.response['content']['statuses'][self.nextTweet]
        self.nextTweet += 1
        return strresponse

    try:
        self.searchNextResults()
    except TwitterSearchException:
        raise StopIteration
    if len(self.response['content']['statuses']) != 0:
        self.nextTweet = 0
        return self.response['content']['statuses'][self.nextTweet]
    raise StopIteration

tso.set_count(5)

tso.set_count(5) is not working on my side:

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['Lucca'])
    tso.set_count(5)
    tso.set_result_type('recent')
#    tso.set_until(datetime.date(2016, 04, 27))
#    tso.set_until(datetime.date(datetime.now()))


    ts = TwitterSearch(
        consumer_key = 'xxx',
        consumer_secret = 'xxx',
        access_token = 'xxxx',
        access_token_secret = 'xxxx'
    )

    for tweet in ts.search_tweets_iterable(tso):

        print tweet['entities']['media'][0]['media_url_https']

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

It's reporting hundreds of results.

KeyError: 'search_metadata'

I seem to be having a problem with your Python lib.

Traceback (most recent call last):
  File "main.py", line 41, in <module>
    for tweet in ts.searchTweetsIterable(tso): # this is where the fun actually starts :)
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 32, in searchTweetsIterable
    self.searchTweets(order)
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 50, in searchTweets
    self.sentSearch(order.createSearchURL())
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 40, in sentSearch
    if self.response['content']['search_metadata'].get('next_results'):
KeyError: 'search_metadata'

This is the error I'm getting when using the sample code from your README (seen below):

from TwitterSearch import *

tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.setKeywords(['hullcompsci']) # can include multiple searches (e.g. for when GGJ or TTG is on) tso.setKeywords(['Guttenberg', 'Doktorarbeit'])
tso.setCount(100) # please dear Mr Twitter, give us 100 results per page (this is the default value, I know :P)
tso.setLanguage('en')
tso.setIncludeEntities(False) # and don't give us all those entity information (this is a default value too)

ts = TwitterSearch(
    consumer_key = 'censored',
    consumer_secret = 'censored',
    access_token = 'censored',
    access_token_secret = 'censored'
)

ts.authenticate()

counter = 0 # just a small counter
for tweet in ts.searchTweetsIterable(tso): # this is where the fun actually starts :)
    counter += 1
    print '@%s tweeted: %s' % (tweet['user']['screen_name'], tweet['text'])

print '*** Found a total of %i tweets' % counter

Thanks

Bug in TwitterSearch when looking for more tweets

Line 115 of TwitterSearch:

self.__nextMaxID = min(self.__response['content']['statuses'], key=lambda i: i['id'])['id'] - 1

Since I've only just started looking at this, I'm essentially following your getting-started guide in the README for a keyword search I'm interested in (single search term: 'ECAWA'), and this line is throwing a ValueError since the argument to min() is an empty sequence.

tso.SetGeocode flagging UK geocode as invalid number

I am trying to constrain the area I search for tweets in the UK, but am receiving an error response from the TwitterSearchOrder.py module.

C:\Users\cjadmin>C:\Users\cjadmin\Desktop\py\search.py
Traceback (most recent call last):
  File "C:\Users\cjadmin\Desktop\py\search.py", line 26, in <module>
    tso.setGeocode(53.409144,-2.147483,10,'mi') # Set location constraints with geocode
  File "C:\Python27\lib\site-packages\twittersearch-0.78.3-py2.7.egg\TwitterSearch\TwitterSearchOrder.py", line 138, in setGeocode
    raise TwitterSearchException(1005)
TwitterSearch.TwitterSearchException.TwitterSearchException: Error 1005: Invalid unit.

I've tried escaping the minus for the geocode but that also fails.

Are UK codes unsupported?

Codec Error When Installing

I am having a problem installing TwitterSearch using Python 3.4 on Windows 7. "pip install TwitterSearch" returns a codec error:

 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 3826: character maps to      <undefined>

My full pip.log can be found here: https://gist.github.com/codedthiscode/a662f8223936e48a645d.

I also tried using easy_install but I am getting the same error. A fair amount of Googling has not solved the problem. Any advice?

Search strings with special/punctuation characters cause unexpected exceptions

While using your lib, I've run into the issue that a ValueError is thrown for search strings containing certain characters such as '(', ')', '[', ']', '$', '?' and "'" (apostrophe), and that TwitterSearch.TwitterSearchException.TwitterSearchException: Error 401: Unauthorized is produced when I use 'test=' or 'test=foo' (basically any time I use the '=' character). Code producing the aforementioned exceptions (CONSUMER_KEY, CONSUMER_SECRET, TOKEN_KEY and TOKEN_SECRET are keys specific to my application and are working):

Python 2.7.5, TwitterSearch 0.78.3

import logging
import traceback
import TwitterSearch

def download_tweets(search_string, language):
    """Return list of tweets containing <search_string>; language should be like 'en' or 'ru'."""

    tso = TwitterSearch.TwitterSearchOrder()
    tso.addKeyword(search_string)
    tso.setLanguage(language)
    tso.setIncludeEntities(False)

    # create a TwitterSearch object with our secret tokens
    ts = TwitterSearch.TwitterSearch(
        consumer_key=CONSUMER_KEY,
        consumer_secret=CONSUMER_SECRET,
        access_token=TOKEN_KEY,
        access_token_secret=TOKEN_SECRET
    )
    try:
        return ts.searchTweetsIterable(tso)

    except TwitterSearch.TwitterSearchException as e:
        logging.exception("%s: %s", e.code, e.message)
        logging.exception("Stack trace: %s", traceback.format_exc())
        raise e

download_tweets("test=", "en")
download_tweets("test=foo", "en")
download_tweets("test'", "en")
download_tweets("test$", "en")
download_tweets("test?", "en")
download_tweets("test(", "en")
download_tweets("test)", "en")
download_tweets("test[", "en")
download_tweets("test]", "en")

tso.setGeocode invalid unit error in TwitterSearchorder.py for UK coordinates

I am trying to constrain the area I search for tweets in the UK, but am receiving an error response from the TwitterSearchOrder.py module.

C:\Users\cjadmin>C:\Users\cjadmin\Desktop\py\search.py
Traceback (most recent call last):
  File "C:\Users\cjadmin\Desktop\py\search.py", line 26, in <module>
    tso.setGeocode(53.409144,-2.147483,10,'mi') # Set location constraints with geocode
  File "C:\Python27\lib\site-packages\twittersearch-0.78.3-py2.7.egg\TwitterSearch\TwitterSearchOrder.py", line 138, in setGeocode
    raise TwitterSearchException(1005)
TwitterSearch.TwitterSearchException.TwitterSearchException: Error 1005: Invalid unit.

I've tried escaping the minus for the geocode but that also fails.

Are UK codes unsupported?

query operators do not work

Hello, I am having trouble using query operators. For example, the query '"michelle bachelet"' returns tweets containing not only the exact phrase "michelle bachelet", but also tweets containing only "michelle", others containing only "bachelet", and others containing both "michelle" and "bachelet" at varying distances between the two words. In general, AND, OR and exact-phrase queries return all three types of results.

I will appreciate any help with this issue.

Search stopping because search_metadata.next_results missing

Thanks for this library. Working very well.

This is more of a question about the Twitter API, I guess, but maybe you've encountered this before.
Every now and again, I find that the search (iterating using searchTweetsIterable) stops because the search_metadata.next_results item is completely missing from the Twitter response. Do you know of a good reason why this happens? I don't see anything about it in the API documentation. It is also not due to rate limiting.

If I manually run another search with my own max_id populated, I get another set of results, again with the search_metadata.next_results missing.

Truncated Tweets

Hi, any way to set tweet_mode to extended so I can access non-truncated tweets? Thanks.

AIOHTTP

Would there be any way to turn this easily into AIOHTTP instead of requests?

Find all tweets near me?

Hi, I am new to this. I have been able to get a script going using TwitterSearch by passing in a keyword or two, but can I mimic the "near me" function of the actual Twitter search (more) options?

I tried different variations like this:
tso.set_keywords([], or_operator = True) # let's define all words we would like to have a look for
tso.set_geocode(138.599, -34.93, 10, imperial_metric=True)

Any help would be appreciated.

KeyError: u'\ufeff'

Not sure if this is an error on my end, or something I don't understand.

Background:
I'm using a Ukrainian word list to mine tweets from Twitter for research. I have it saved as a cPickle file, which I load and am able to print in Python without any problems.

Problem:
I receive the following error and can't figure out what is throwing it. Any help would be appreciated.

Traceback (most recent call last):
  File "<pyshell#29>", line 1, in <module>
    execfile("twittersearchloc.py")
  File "twittersearchloc.py", line 25, in <module>
    for tweet in ts.search_tweets_iterable(tso):
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearch.py", line 204, in search_tweets_iterable
    self.search_tweets(order)
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearch.py", line 305, in search_tweets
    self._start_url = order.create_search_url()
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearchOrder.py", line 232, in create_search_url
    url += '+'.join([quote_plus(i) for i in self.searchterms])
  File "C:\Python27\lib\urllib.py", line 1310, in quote_plus
    return quote(s, safe)
  File "C:\Python27\lib\urllib.py", line 1303, in quote
    return ''.join(map(quoter, s))
KeyError: u'\ufeff'

How I tried to solve the problem:

I figured the non-ASCII format was throwing it off, but trying to decode or encode it into different formats didn't work.
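One possible remedy, sketched under the assumption that the failing character really is a stray byte-order mark (U+FEFF, matching the KeyError above) left over in the pickled word list under Python 2: strip it and pass UTF-8 encoded byte strings to set_keywords.

# -*- coding: utf-8 -*-
from TwitterSearch import TwitterSearchOrder

# placeholder word list; in the real script these come from the cPickle file
raw_keywords = [u'\ufeff\u0441\u043b\u043e\u0432\u043e', u'\u043c\u0438\u0440']

# drop the BOM and encode to UTF-8 byte strings so that Python 2's
# urllib.quote_plus does not choke on non-ASCII unicode characters
clean_keywords = [kw.replace(u'\ufeff', u'').strip().encode('utf-8')
                  for kw in raw_keywords]

tso = TwitterSearchOrder()
tso.set_keywords(clean_keywords)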

Use No Keywords?

I would like to make a query using a geocode argument only: just give it coordinates, a radius, and a date range, and have it pull up all tweets in the area. When I try this, however, I get the "No Keywords Given" error. Is it possible to make a query with no keywords in this library?

API limit information

Hello,

Is it possible to get request usage?
I mean the maximum number of requests that can be made with the given credentials, together with the number of requests already made.

Thank you

Regards

Exclude retweets and replies

Add methods to TwitterSearchOrder for excluding retweets and replies.

There is currently a work-around for this:
tso.set_keywords(['yourKeywordHere', '-filter:retweets', '-filter:replies'])
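A complete sketch using that work-around (credentials are placeholders, following the README example above):

from TwitterSearch import *

try:
    tso = TwitterSearchOrder()
    # the '-filter:' operators are passed through as ordinary keywords
    tso.set_keywords(['yourKeywordHere', '-filter:retweets', '-filter:replies'])

    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
    )

    for tweet in ts.search_tweets_iterable(tso):
        print('@%s tweeted: %s' % (tweet['user']['screen_name'], tweet['text']))

except TwitterSearchException as e:
    print(e)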
