geduldig / TwitterAPI
Minimal Python wrapper for Twitter's REST and Streaming APIs
Great library - I love the simplicity. What do you think about a pep8 cleanup? e.g. spaces not tabs, ~80 column limit etc?
I've had my statuses/sample stream crash a couple of times with the following traceback:
Traceback (most recent call last):
File "[...]/quac/bin/collect", line 204, in main_real
self.collect()
File "[...]/quac/bin/collect", line 224, in collect
for tweet in self.stream:
File "[...]/lib/python3.4/site-packages/TwitterAPI/TwitterAPI.py", line 203, in __iter__
yield json.loads(item.decode('utf-8'))
File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.4/json/decoder.py", line 359, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 1984 (char 1983)
The first one was after about 68 million tweets; the second I didn't count but it was 4 hours of collection, so a couple million is a decent guess.
I am guessing this is due to malformed data in the stream from Twitter, but I haven't verified this.
My preferred behavior would be to continue with the stream somehow, but I'm not sure how best to achieve this. Some options I came up with are:
1. Wrap each call to next() explicitly in a try/except block. I can do this on my end, though it is a little ugly; if that is the recommended approach, it might be worth noting in the documentation.
2. Yield None or an error object of some kind on bad data (maybe the exception itself). This makes processing the results of iteration more complex but keeps the simple iteration API.
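For what it's worth, a minimal sketch of the try/except approach, with illustrative names (`robust_tweets` and the simulated input are mine, not part of TwitterAPI):

```python
import json

def robust_tweets(raw_lines):
    # Parse each raw item, skipping malformed JSON instead of letting
    # a ValueError kill the whole stream. `raw_lines` stands in for the
    # raw items a stream would yield (a hypothetical wrapper, not
    # TwitterAPI's actual API).
    for line in raw_lines:
        try:
            yield json.loads(line)
        except ValueError:
            continue  # bad data from Twitter; drop it and keep going

# Simulated stream: one good item, one truncated item, one good item.
raw = ['{"text": "hello"}', '{"text": "uh oh', '{"text": "world"}']
tweets = list(robust_tweets(raw))  # the truncated item is skipped
```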
Do you have an opinion or suggestions?
Thanks for your work on TwitterAPI.
Hi,
I'm using this API and I was testing locations:
response = api.request('statuses/filter', {'locations':'-74,40,-73,41'})
print(response.text)
It should get me tweets from NY, but I'm not getting anything. I also don't know if there is a problem, because it prints nothing.
Nice job on the API; I hope you can help me.
Hi all,
I'm trying to use the 'statuses/filter' endpoint with a 'follow' parameter. I'm not getting any errors, but I'm also not getting any tweets.
try:
    r = api.request('statuses/filter', {'follow': userID})
    for item in r:
        # if 'text' in item: print item['text']
        print item
except:
    print "Error streaming"
Is this endpoint+parameter supported?
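Possibly relevant: the streaming 'follow' parameter expects numeric user IDs (not screen names), passed as a comma-separated string. A sketch of building the parameter dict (the helper name is mine):

```python
def build_follow_params(user_ids):
    # Join numeric user IDs into the comma-separated string the
    # 'statuses/filter' endpoint expects; screen names will silently
    # match nothing.
    return {'follow': ','.join(str(uid) for uid in user_ids)}

params = build_follow_params([2244994945, 783214])
# r = api.request('statuses/filter', params)
```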
Hey guys - trying to get started but getting a 401 error (auth error).
I'm assuming the only thing I can mess up in the hello world example is passing incorrect arguments to api = TwitterAPI(...). Or possibly that I'm trying to run these commands locally, while the API key is registered for an actual domain, e.g. "http://mysite.com".
$ python
>>> from TwitterAPI import TwitterAPI
>>> api = TwitterAPI(api_key, api_secret, access_token, access_token_secret)
>>> r = api.request('statuses/update', {'status':'This is a tweet!'})
>>> print r.status_code
401
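One way to see why the request was rejected is to print the response body along with the status code; Twitter usually includes a human-readable message. A small helper to pull it out of the error JSON (a sketch; the function name and sample body are illustrative):

```python
import json

def explain_error(status_code, body):
    # Twitter error bodies look like {"errors":[{"code":32,"message":"..."}]}.
    try:
        errors = json.loads(body).get('errors', [])
        detail = '; '.join(e.get('message', '') for e in errors)
    except ValueError:
        detail = body  # not JSON; show it raw
    return '%s: %s' % (status_code, detail)

msg = explain_error(401, '{"errors":[{"code":32,"message":"Could not authenticate you."}]}')
```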
Is it possible to stream multiple accounts from the same .py file? I'm having trouble with memory usage running 300+ scripts at the same time.
Traceback (most recent call last):
File "tweet.py", line 1, in
from TwitterAPI import TwitterAPI
File "/usr/lib/python2.7/dist-packages/TwitterAPI/TwitterAPI.py", line 9, in
from requests.exceptions import ConnectionError, ReadTimeout, SSLError
ImportError: cannot import name ReadTimeout
I can't seem to get this to work. Is there something I'm doing wrong?
That.
I know it's a minimal wrapper, but maybe.
Thanks
Is it possible to pass two parameters with a streaming request? I swear I saw something about this in the documentation a while ago but cannot seem to find it again. I'm trying to do:
r = api.request('statuses/filter', {'track': filt, 'locations': location})
From the readme:
r = api.request('statuses/filter', {'locations':'-74,40,-73,41'})
for item in r.get_iterator():
print item
When I try this (changing the print statement to a function for Python 3), I get an AttributeError:
'Response' object has no attribute 'get_iterator'
api has a user context, and sending tweets using statuses/update works.
Sorry, just never had the chance to comment / post the version. 2013 MacBook Air, newest OS X. Let me know if you need anything else. Thanks!
python --version
Python 2.7.9
python server.py
Traceback (most recent call last):
File "server.py", line 8, in <module>
from TwitterAPI import TwitterAPI
File "/Library/Python/2.7/site-packages/TwitterAPI/__init__.py", line 12, in <module>
logging.getLogger(__name__).addHandler(logging.NullHandler())
AttributeError: 'module' object has no attribute 'NullHandler'
I get the following error when trying to import TwitterAPI:
ImportError: No module named requests_oauthlib
Hi, I've asked on stackoverflow.com, but I think maybe it's better this way.
In the examples there is nothing about passing the ':id' parameter. I tried:
data = {'id': 557856389746683905}
response = api.request('statuses/show/:id', data)
and the response is
{'message': 'Sorry, that page does not exist', 'code': 34}
I suppose there is some kind of syntax I don't get, hence my question.
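If I understand the library's convention correctly, the ':id' in the endpoint name is a placeholder you substitute into the resource path yourself, rather than a value passed in the params dict:

```python
tweet_id = 557856389746683905
# Substitute the id into the path; don't pass it as a parameter.
resource = 'statuses/show/:%d' % tweet_id
# response = api.request(resource)
```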
This is probably a mistake by me.
I have run "pip install" (also with sudo) for the TwitterAPI library, as per your instructions, but am still getting the error.
Setup: Raspbian, Python 3.x. Not sure what else you might need.
I'm trying to search for tweets not written in a particular language. While -lang:en works in my search on twitter.com, '-lang':'en' doesn't seem to filter in my API searches. Is there something I should be doing differently?
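As far as I know, negation like -lang:en is a twitter.com search feature rather than an API search operator; one workaround is to filter client-side on the tweet's 'lang' field. A sketch with hypothetical data:

```python
def exclude_lang(tweets, lang):
    # Keep only tweets whose 'lang' field differs from the given code.
    return [t for t in tweets if t.get('lang') != lang]

sample = [{'lang': 'en', 'text': 'hi'}, {'lang': 'es', 'text': 'hola'}]
non_english = exclude_lang(sample, 'en')
```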
Hi, when I try to do this:
r = api.request('friendships/create', {'follow': 'false', 'user_id':userid})
I get this response:
r.text --->{"errors":[{"code":108,"message":"Cannot find specified user."}]}
r. headers ----> {'status': '403 Forbidden', 'x-response-time': '35', 'content-length': '65', 'x-twitter-response-tags': 'BouncerCompliant', 'content-disposition': 'attachment; filename=json.json', 'x-content-type-options': 'nosniff', 'x-connection-hash': '', 'set-cookie': 'lang=en, guest_id=; Domain=.twitter.com; Path=/; Expires=Sat, 01-Apr-2017 20:20:33 UTC', 'strict-transport-security': 'max-age=631138519', 'x-access-level': 'read-write', 'x-tsa-request-body-time': '0', 'expires': 'Tue, 31 Mar 1981 05:00:00 GMT', 'server': 'tsa_b', 'last-modified': 'Thu, 02 Apr 2015 20:20:33 GMT', 'x-xss-protection': '1; mode=block', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate, pre-check=0, post-check=0', 'date': 'Thu, 02 Apr 2015 20:20:33 GMT', 'x-transaction': '', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json;charset=utf-8'}
r.response ----> <Response [403]>
Any idea why?
Hello
I want to use this Twitter API to store tweets in my database. I am new to open source and GitHub.
I am using Ubuntu 12.04.
Please help me with the next steps.
Thanks
Paramjot Singh
[email protected]
9811977850
hitting this error:
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from TwitterAPI import TwitterAPI
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/TwitterAPI/TwitterAPI.py", line 10, in <module>
from requests.packages.urllib3.exceptions import ReadTimeoutError, ProtocolError
ImportError: cannot import name ReadTimeoutError
>>> import TwitterAPI
>>> TwitterAPI.__version__
'2.3.3.1'
>>>
>>> import urllib3
>>> urllib3.__version__
'1.9.1'
>>>
TwitterAPI is installed via pip
Hi
I just installed the latest release of TwitterAPI (2.2.5) in my Python 3.3 distribution and tried to use the 'media/upload' REST endpoint, as described in your tiny example 'Post a tweet with a picture' (http://geduldig.github.io/TwitterAPI/examples.html), but got the following message:
'File "/home/tlokmane/python3_venv/lib/python3.3/site-packages/TwitterAPI/TwitterAPI.py", line 94, in request
raise Exception('"%s" is not valid endpoint' % resource)
Exception: "media/upload" is not valid endpoint'
Am I wrong, or is the 'media/upload' endpoint (which replaces the deprecated 'statuses/update_with_media' endpoint) not yet implemented?
If it is not, is the implementation of this endpoint planned for an upcoming release?
Thanks a lot
Tewfik
This is after importing straight out of a pip install.
MacBook Air, newest OS X, Python version 2.7.9. Not in a virtualenv.
Traceback (most recent call last):
File "app.py", line 2, in <module>
from TwitterAPI import TwitterAPI
File "/Library/Python/2.7/site-packages/TwitterAPI/__init__.py", line 12, in <module>
logging.getLogger(__name__).addHandler(logging.NullHandler())
AttributeError: 'module' object has no attribute 'NullHandler'
Hi there,
I'm trying to integrate your twitterAPI with django. I would like to call a function that downloads 200 tweets. When I run the following code as a script, I'm able to break out of the loop after downloading the tweets. When I run it as a django view, I cannot get out of the get_iterator loop. Is there a way to break the iterator reliably? Or iterate until an endpoint?
def tweetmap(request):
    stream = api.request('statuses/filter', {'locations':'-122,36,-121,37'})
    count = 0
    for tweet in stream.get_iterator():
        try:
            cleaned_tweet = (tweet['text'], tweet['coordinates']['coordinates'])
            count += 1
            print cleaned_tweet
            if count > 200:
                return HttpResponse("Complete")
        except:
            pass
C:\Python33\Lib\site-packages\TwitterAPI>python cli.py -e statuses/filter -p track=zzz -f screen_name text
*** STOPPED 'Request' object has no attribute 'body'
..and
C:\Python33\Lib\site-packages\TwitterAPI>python cli.py -endpoint statuses/update
-parameters status='my tweet'
*** STOPPED need more than 1 value to unpack
Help, what should I do?
[email protected]
I am getting this error: 'module' object has no attribute 'NullHandler'. Using Python 2.7. Help?
I can't figure out why I'm getting the following error, any ideas?
File "/root/.local/lib/python2.7/site-packages/TwitterAPI/TwitterAPI.py", line 9, in <module>
from requests.exceptions import ConnectionError, ReadTimeout, SSLError
ImportError: cannot import name ReadTimeout
Hi there,
Is there a way to allow the iterator to run continuously? Can't seem to do so for more than half a day without my script hanging.
Jason
Hi, I have been enjoying this API. I'm trying some data mining, but I hit a problem with TwitterRestPager; it gives me
*** STOPPED HTTPSConnectionPool(host='api.twitter.com', port=443): Read timed out.
Is there a way to increase the timeout limit? I was hoping to get tons of tweets, but I only get around 300.
Thanks
was looking at wrong API docs
Thanks for all the work you put into this. One thing that I see that could help is adding proxy support. I see it uses requests, but for the life of me I can not figure out how to add that function in the script, it looks like the base code would need to be changed to allow for it.
This is such a simple and easy to use script for the API, thanks for getting the OAuth right!
Rudy
I am using TwitterAPI to consume the statuses/sample
endpoint. With stock TwitterAPI 2.2.5, the following test script fails with IncompleteRead
after around 6000 tweets:
import TwitterAPI

api = TwitterAPI.TwitterAPI('...', '...', '...', '...')
r = api.request('statuses/sample')
i = 0
for item in r:
    if i % 1000 == 0:
        print(i, end='', flush=True)
    else:
        print('.', end='', flush=True)
    i += 1
While listening, the script uses 100% of one core. The processor is an i7 at 3.4 GHz, so plenty of CPU available.
Consumption appears to be about 30 tweets per second, but 60 or more are available. Thus, I suspect that Twitter is dropping the connection because the consumer can't keep up.
The problem can be solved by modifying line 197 of TwitterAPI.py
to remove the 1-byte buffer argument from _StreamingIterable.__init__()
:
def __init__(self, response):
    self.results = response.iter_lines()
With this modification, TwitterAPI is able to consume the full statuses/sample
stream at around 5% of a core, for a speedup of ~40x.
Looking through prior issues, this seems to revert the fix for issue #20. I am not sure exactly how to reconcile this. However, TwitterAPI is unusable for me without the larger buffer. My current workaround is to monkey-patch the above method so I don't have a dependency on a patched TwitterAPI.
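For reference, the monkey-patch looks roughly like this. The names refer to TwitterAPI internals at the time of this report and may have changed; the stand-in response class below is only for illustration:

```python
def patched_init(self, response):
    # Use requests' default buffered line iterator instead of the
    # 1-byte buffer, trading per-byte reads for chunked reads.
    self.results = response.iter_lines()

# Applying the patch would look like (assumed internal name):
# TwitterAPI.TwitterAPI._StreamingIterable.__init__ = patched_init

# Quick check against a stand-in response object:
class FakeResponse(object):
    def iter_lines(self):
        return iter([b'{"a": 1}', b'{"b": 2}'])

class Holder(object):
    pass

h = Holder()
patched_init(h, FakeResponse())
lines = list(h.results)
```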
Thanks for your hard work on this library. TwitterAPI was critical to my being able to upgrade to Python 3.
Just wanted to suggest that you change "if remaining == 0" to "if remaining == None". It throws an error when the count hits "None" otherwise, or at least that's what I think it's doing!
def get_rest_quota(self):
    """Return quota information from the response header of a REST API request."""
    remaining, limit, reset = None, None, None
    if self.response:
        if 'x-rate-limit-remaining' in self.response.headers:
            remaining = int(self.response.headers['x-rate-limit-remaining'])
            if remaining == 0:
                limit = int(self.response.headers['x-rate-limit-limit'])
                reset = int(self.response.headers['x-rate-limit-reset'])
                reset = datetime.fromtimestamp(reset)
    return {'remaining': remaining, 'limit': limit, 'reset': reset}
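A standalone sketch of the guard being suggested, operating on a plain headers dict so it can be tried in isolation (the function here is an illustration, not the library's code): only touch the rate-limit values when the header is actually present, so a response without them yields all Nones instead of raising.

```python
from datetime import datetime

def get_rest_quota(headers):
    # Guard: only int() header values that exist, so a response with
    # no rate-limit headers returns Nones instead of raising.
    remaining = limit = reset = None
    if 'x-rate-limit-remaining' in headers:
        remaining = int(headers['x-rate-limit-remaining'])
        if remaining == 0:
            limit = int(headers['x-rate-limit-limit'])
            reset = datetime.fromtimestamp(int(headers['x-rate-limit-reset']))
    return {'remaining': remaining, 'limit': limit, 'reset': reset}

empty = get_rest_quota({})  # no headers: everything stays None
hit = get_rest_quota({'x-rate-limit-remaining': '0',
                      'x-rate-limit-limit': '15',
                      'x-rate-limit-reset': '1500000000'})
```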
I'm working on a project that doesn't involve users, so I'd really like the higher rate limits of application-only auth. Is that supported here? Other than that, it's working really well :)
Hi,
I've been using Twitter API 2.2.5 without a problem for the last month or so with the following code:
api = TwitterAPI(api_key, api_secret, auth_type='oAuth2')
However, I just updated to 2.2.7.3 and the same line of code now throws the following error:
Traceback (most recent call last):
File "", line 1, in
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/TwitterAPI/TwitterAPI.py", line 49, in init
proxies=self.proxies)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/TwitterAPI/BearerAuth.py", line 23, in init
self._bearer_token = self._get_access_token()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/TwitterAPI/BearerAuth.py", line 27, in _get_access_token
REST_SUBDOMAIN,
NameError: name 'REST_SUBDOMAIN' is not defined
Are you planning to support the https://api.twitter.com/1.1/timelines/timeline.json endpoint?
On Windows 7 32-bit, my installation steps were as follows:
I downloaded the .zip file and extracted it.
I ran "python setup.py build" and "python setup.py install".
After installing, I just write "from TwitterAPI import TwitterAPI" in a .py file,
but it returns "ImportError: No module named TwitterOAuth".
Hi @geduldig
I noticed a problem in oauth_test.py, line 21:
oauth = OAuth1(consumer_key, consumer_secret, request_key, request_secret, verifier)
The last parameter is causing a problem, giving the following error:
<?xml version="1.0" encoding="UTF-8"?>
<hash>
<error>Required oauth_verifier parameter not provided</error>
<request>/oauth/access_token</request>
</hash>
This function needs the verifier passed as a named argument, so the solution to this issue is:
oauth = OAuth1(consumer_key, consumer_secret, request_key, request_secret, verifier=verifier)
Just a friendly FYI
Hi all,
I know this is probably a really simple thing, but I'm relatively new to programming and looking for some help with an error I keep encountering.
This is the error here:
Traceback (most recent call last):
File "C:\Users\Fraser\Desktop\TwitterStream.py", line 19, in
print(item['text'] if 'text' in item else item)
File "C:\Python33\lib\idlelib\PyShell.py", line 1318, in write
return self.shell.write(s, self.tags)
UnicodeEncodeError: 'UCS-2' codec can't encode characters in position 118-118: Non-BMP character not supported in Tk
Any idea what would help with this?
thanks!
Does this only support tweets from NYC? Because when I try other latitudes and longitudes, I get a request failed error
pip install TwitterAPI fails; here is the log:
Downloading from URL https://pypi.python.org/packages/source/T/TwitterAPI/TwitterAPI-2.2.7.1.tar.gz#md5=b773499cde73
3e64e47900b5846c96b2 (from https://pypi.python.org/simple/twitterapi/)
Running setup.py (path:/home/laurent/Envs/action_manager/build/TwitterAPI/setup.py) egg_info for package TwitterAPI
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/home/laurent/Envs/action_manager/build/TwitterAPI/setup.py", line 26, in <module>
long_description=read('README.rst'),
File "/home/laurent/Envs/action_manager/build/TwitterAPI/setup.py", line 10, in read
with io.open(filename, encoding=encoding) as f:
IOError: [Errno 2] No such file or directory: 'README.rst'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/home/laurent/Envs/action_manager/build/TwitterAPI/setup.py", line 26, in <module>
long_description=read('README.rst'),
File "/home/laurent/Envs/action_manager/build/TwitterAPI/setup.py", line 10, in read
with io.open(filename, encoding=encoding) as f:
IOError: [Errno 2] No such file or directory: 'README.rst'
If you look inside https://pypi.python.org/packages/source/T/TwitterAPI/TwitterAPI-2.2.7.1.tar.gz, there is no README.rst file.
When using the test script provided in tests/, the following fails with JSON errors:
api.request('statuses/filter', {'locations':'-74,40,-73,41'})
iter = api.get_iterator()
for item in iter:
    sys.stdout.write('%s\n' % item['text'])
I'm not sure why this isn't working. The cURL example on Twitter's site passes locations as a --data parameter. Maybe something in requests?
Can you confirm this works for you?
It would be really nice to have a function to stop the stream when running r = api.request('statuses/filter', {'track': TRACK_TERM}). Something like r.stop().
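In the meantime, a common pattern is to break out of the iteration loop after enough items and then drop the connection. The close call is an assumption about the wrapper's internals (r.response being the underlying requests.Response); the take() helper below is just an illustration of the break logic:

```python
import itertools

def take(stream, limit):
    # Consume at most `limit` items from any iterable, then stop.
    return list(itertools.islice(stream, limit))

items = take(iter(range(100)), 5)
# With a real stream, follow the break with:
#   r.response.close()   # assumed internal attribute; ends the connection
```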
I am trying to query multiple parameters (locations and track) against the streaming API, though for some reason it will not take multiple inputs.
Is this unavailable at present or is there particular syntax required?
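Passing both at once should work as long as each value is a comma-separated string rather than a Python list; note also that Twitter ORs the predicates, so a tweet matching either one is delivered. A sketch (the helper name is mine):

```python
def build_filter_params(track_terms, boxes):
    # Flatten both inputs into the comma-separated strings the
    # streaming endpoint expects.
    return {
        'track': ','.join(track_terms),
        'locations': ','.join(str(c) for box in boxes for c in box),
    }

params = build_filter_params(['python', 'twitter'], [(-74, 40, -73, 41)])
```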
I'm trying to use
r = api.request('statuses/filter', {'lang':'en'})
to get all tweets from all locations that are in English, but it's returning the following error:
> python testing.py
Traceback (most recent call last):
File "testing.py", line 21, in <module>
for item in r.get_iterator():
File "/Library/Python/2.7/site-packages/TwitterAPI/TwitterAPI.py", line 203, in __iter__
yield json.loads(item.decode('utf-8'))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 383, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Is there a way to get this with the streaming api or is it just with the search or something?
Also, a somewhat unrelated question: does the location filter only work with integers? I'd like to search quite a specific location, and I can't seem to specify very precise areas without getting errors.
Description: When I try to make the 'list/members/create_all' API request using a list in the 'user_id' param, it fails to send the request with all the elements of the list - only the first one makes it through. Looking at the Requests Python Library, it looks like the param element needs to be appended with '[]' (i.e., 'user_id[]') to signify that the data is a list that needs to be sent. However, when I try to do this in the TwitterAPI request, I get a response back of a missing parameter. Is it possible to add the check for a list in 'user_id'?
Pre-Conditions:
-twitterapi==2.3.3 installed
Steps:
Expected: All user_ids listed in the 'user_id' param are added as members to the list.
Actual: Only the first user_id in the 'user_id' param is added as a member to the list.
Note: Requests Python Library documentation on passing lists in a param: http://docs.python-requests.org/en/latest/user/quickstart/
FROM DOC:
In order to pass a list of items as a value you must mark the key as referring to a list like string by appending [] to the key:
payload = {'key1': 'value1', 'key2[]': ['value2', 'value3']}
r = requests.get("http://httpbin.org/get", params=payload)
print(r.url)
# http://httpbin.org/get?key1=value1&key2%5B%5D=value2&key2%5B%5D=value3
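An alternative that sidesteps the '[]' question entirely: Twitter's bulk endpoints accept comma-separated IDs in a single string, so the Python list can be joined before making the request (a sketch; the helper name and sample IDs are mine):

```python
def build_create_all_params(list_id, user_ids):
    # One string of comma-separated IDs, instead of relying on
    # requests' handling of list-valued params.
    return {'list_id': list_id,
            'user_id': ','.join(str(u) for u in user_ids)}

params = build_create_all_params(1234, [111, 222, 333])
```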
Hi,
I'm trying to set up the TwitterAPI with about 200 keywords and 4500 user IDs. Everything works fine until I reach 1200 user IDs; when adding more accounts I receive a 503 error.
Is it possible that, even when passing parameters, the request is still made using the GET method?
the code:
opz['track'] = an array of 200 keywords
opz['follow'] = an array of 1100 userIDs
r = self.api.request('statuses/filter', {'track': opz['track'], 'follow': opz['follow']})
response with > 1100 userIDs
STATUS CODE: 503
HEADERS: {'date': 'Mon, 18 Aug 2014 16:38:12 UTC', 'connection': 'close', 'content-length': '31', 'x-connection-hash': '***xyz***'}
response with 1100 userIDS
STATUS CODE: 200
HEADERS: {'transfer-encoding': 'chunked', 'date': 'Mon, 18 Aug 2014 16:40:12 UTC', 'connection': 'close', 'content-type': 'application/json', 'x-connection-hash': '***xyz***'}
Thanks,
Riccardo
Hello,
When trying to install the package via pip, I encountered the bug related to requests_oauthlib
(which it seems you already fixed: 8ddb8eb).
I tried to install version 2.0.9.1, to match the environment of a colleague, but PyPI only seems to have version 2.1.0: https://pypi.python.org/pypi/TwitterAPI
There don't seem to be any tags here, on the github repo, either.
I hope you agree that having access to older versions would be useful.
Could you give a clue about this?
I'm trying to modify the track parameter on the statuses/filter endpoint, but the stream keeps giving me data from the original filter.
I tried to delete api var and recreate it, but no luck.
Maybe there is something related to the requests.session(), but I can't imagine what it is.
Thanks in advance.
P.S.: I rewrote the whole code and the issue disappeared. Thank you.
raw_tweet = next(raw_tweets.get_iterator())
fails with "TypeError: RestIterator object is not an iterator". I believe this is because RestIterator doesn't implement a next() method. StreamingIterator is also missing next().
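A workaround until next() is implemented: wrap the object in iter() first, which works for anything that implements __iter__. Demonstrated here against a stand-in class (the class is mine, mimicking the reported behavior):

```python
class RestIteratorLike(object):
    # Mimics an object that implements __iter__ but not next()/__next__.
    def __init__(self, items):
        self._items = items
    def __iter__(self):
        return iter(self._items)

raw_tweets = RestIteratorLike([{'text': 'first'}, {'text': 'second'}])
raw_tweet = next(iter(raw_tweets))  # iter() supplies the missing next()
```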
I am using the following pattern to process tweets in real-time:
r = api.request('statuses/filter', {'track': 'mykeyword'})
for item in r.get_iterator():
print item
I am using the latest version of TwitterAPI available on PIP (2.1.9)
Whenever a keyword is tweeted, within about a second the last tweet that matched the filter is printed on the screen. I found a similar report from another python Twitter Streaming API library where they seem to have found a solution: http://stackoverflow.com/questions/10083936/streaming-api-with-tweepy-only-returns-second-last-tweet-and-not-the-immediately
I'll lay out an example of what is happening:
It is almost as if the tweets are received immediately, they push out the last tweet that was received, but that newest tweet itself is not passed to the iterator and it just sits there waiting for something else to come in before it is able to be processed.