psf / requests


A simple, yet elegant, HTTP library.

Home Page: https://requests.readthedocs.io/en/latest/

License: Apache License 2.0

Makefile 0.70% Python 99.30%
python http forhumans requests python-requests client humans cookies

requests's People

Contributors

aless10, brauliovm, continuousfunction, daftshady, davidfischer, dependabot[bot], dpursehouse, gazpachoking, graingert, hugovk, idan, jasongrout, jdufresne, jerem, jgorset, kennethreitz, kevinburke, klimenko, kmadac, lukasa, mgiuca, mjpieters, monkeython, mrtazz, nateprewitt, schlamar, sethmlarson, sigmavirus24, slingamn, t-8ch


requests's Issues

Authorization header missing?

I'm confused about why an Authorization header isn't sent on the first request; it seems a 401 response is required before a second request goes out with the Authorization header.

import requests

r = requests.post("http://localhost:8080/", auth=('test', 'test'))
esummers@roentgenium:~$ netcat -l localhost 8080
POST / HTTP/1.1
Accept-Encoding: identity
Content-Length: 0
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded
Connection: close
User-Agent: Python-urllib/2.6
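
As a workaround, the header can be built by hand so the credentials go out on the very first request. A sketch (basic_auth_header is a hypothetical helper, not part of requests):

```python
import base64

def basic_auth_header(user, password):
    # Preemptive HTTP Basic auth: base64-encode "user:password" and
    # send it ourselves instead of waiting for a 401 challenge.
    token = base64.b64encode(('%s:%s' % (user, password)).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')
```

Passing the result via the headers argument, e.g. requests.post(url, headers={'Authorization': basic_auth_header('test', 'test')}), should make the header show up in the netcat capture on the first request.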

Add support for proxies

It would be nice to be able to provide a dict of proxies to use eg

proxies = {'http': 'http://someproxy.com:8080', 'https': 'http://anotherproxy.com:8443'}
requests.get(url, params={}, headers={}, cookies=None, auth=None, proxies={}, **kwargs)
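
Such a dict maps directly onto urllib's ProxyHandler, so the plumbing already exists in the stdlib; a sketch of how the library could consume it (opener_for is hypothetical, modern urllib names shown):

```python
import urllib.request

def opener_for(proxies):
    # Route each scheme through the proxy given in the dict; an empty
    # dict falls back to direct connections.
    return urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

proxies = {'http': 'http://someproxy.com:8080',
           'https': 'http://anotherproxy.com:8443'}
opener = opener_for(proxies)
```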

Fix allow_redirects

Hi,

I have problems with the allow_redirects argument to the Request object. It seems that even if you specify allow_redirects=False, redirects are still followed. I think I've found the source of the problem on line 169 in models.py:

while (
            ('location' in r.headers) and
            ((self.method in ('GET', 'HEAD')) or
            (r.status_code is 303) or
            (self.allow_redirects))
        ):

Shouldn't the "or (self.allow_redirects)" be "and (self.allow_redirects)"?
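
Restating the proposed fix as a standalone predicate (a hypothetical helper, not requests code) makes the and-vs-or difference easy to check:

```python
def should_follow(has_location, method, status_code, allow_redirects):
    # With "and" instead of "or", allow_redirects=False always wins;
    # with the original "or", a GET would be followed regardless.
    return (has_location
            and (method in ('GET', 'HEAD') or status_code == 303)
            and allow_redirects)
```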

Facilitation for non-ASCII-compatible unicode strings

requests is incompatible with unicode strings that contain non-ASCII characters because it relies on urllib to coerce dictionaries into their query string equivalents:

Traceback (most recent call last):
  File "test.py", line 5, in <module>
    print requests.get('http://example.org', params={'foo': u'føø'})
  File "/Library/Python/2.6/site-packages/requests/core.py", line 467, in get
    return request('GET', url, params=params, headers=headers, cookies=cookies, auth=auth)
  File "/Library/Python/2.6/site-packages/requests/core.py", line 451, in request
    auth=kwargs.pop('auth', auth_manager.get_auth(url)))
  File "/Library/Python/2.6/site-packages/requests/core.py", line 76, in __init__
    self._enc_data = urllib.urlencode(data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 1261, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1-2: ordinal not in range(128)

This shortcoming has been discussed on the python-dev mailing list, where it was agreed that urllib should raise a TypeError if its arguments are not byte strings. This was never implemented, however, and while the exception it raises now is certainly an improvement on the one it produced back then, urllib continues to raise it only if a unicode string cannot be encoded with the system's default encoding (which, in turn, defaults to ASCII).

# This works, because all characters in the word "foo" exist in ASCII
requests.get('http://example.org', params={'foo': u'foo'})

# This doesn't work, because "ø" does not exist in ASCII
requests.get('http://example.org', params={'foo': u'føø'})

# This remedies the issue, but is arguably not the right place to be doing it
requests.get('http://example.org', params={'foo': u'føø'.encode('utf-8')})

It may be outside the scope of this library to compensate for urllib so that it does not choke on ASCII-incompatible unicode strings, but I would at least consider either transparently encoding unicode strings as UTF-8 or raising a TypeError upon receiving unicode strings as request data or parameters.
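
The "transparently encode as UTF-8" option amounts to a single encode before percent-encoding. A sketch (encode_param is a hypothetical helper, written with the modern urllib.parse names):

```python
from urllib.parse import quote_plus

def encode_param(value, encoding='utf-8'):
    # Encode unicode text ourselves instead of letting the stdlib
    # fall back to the ASCII default and raise UnicodeEncodeError.
    if isinstance(value, str):
        value = value.encode(encoding)
    return quote_plus(value)
```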

Add verbose logging support for debugging

>>> requests.settings.log = True
# Logs all HTTP Traffic to stderr

>>> requests.settings.log = stream_like_object
# Logs all HTTP Traffic to given stream

>>> requests.settings.log = False
# [Default] No HTTP Logging

Parameters are getting out of hand.

Use context manager as an alternative way to make requests?

Currently, the settings context manager is a singleton. I suspect this will cause issues under multiprocessing.

please support https client certificate authentication

The following Apache config section enforces the kind of auth I'm talking about:

SSLCACertificateFile /root/CA/cacert.pem
SSLVerifyClient require
SSLVerifyDepth 1

Using httplib2 (http://code.google.com/p/httplib2/) this is done as follows:

from httplib2 import Http
from urlparse import urlparse

http = Http()
http.add_certificate(keyfile, certfile, urlparse(url).netloc)

...which is great, except for the suck of having to parse the netloc.

My wishlist for requests would be:

requests.get(full_url, auth=AuthObject(method='cert', keyfile=keyfile, certfile=certfile))

However, on this subject, it's important for auth types to be usable in parallel.
The specific case I have here uses both a client certificate and basic auth at the same time.
I have no idea how I'd choose to spell that, although httplib2 does it quite nicely:

http = Http()
http.add_credentials(username, password)
http.add_certificate(keyfile, certfile, urlparse(url).netloc)

thanks!

Chris

requests.post should accept strings in addition to dicts

The specification for HTTP POST is that it has a "message body". HTML forms stick encoded key/value pairs representing form inputs into that message body when submitted. However, this means that requests.post currently can't accept JSON or XML message bodies, which could make it hard to use with some APIs. Therefore, I propose the following:

"""inside of requests.post method"""

if isinstance(data, dict):
    # process as existing code now handles POST
elif isinstance(data, str) or isinstance(data, unicode):
    # Stick into message body
    # Do encoding if that is what the spec demands (I'm not sure)
else:
    # raise InvalidRequestDataFormat

"""Example of implementation"""

import json

import requests

post_dict = {"hello":"world"}
r = requests.post(form_url, post_dict)

post_json = json.dumps(post_dict)
r = requests.post(json_url, post_json)
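
The proposal boils down to dispatching on the type of data. A minimal sketch (encode_body is a hypothetical helper, written with Python 3 names, where unicode is str):

```python
from urllib.parse import urlencode

def encode_body(data):
    # Dicts keep today's form-encoding behaviour; strings and bytes
    # pass through untouched so callers can post JSON or XML bodies.
    if isinstance(data, dict):
        return urlencode(data)
    if isinstance(data, (str, bytes)):
        return data
    raise TypeError('unsupported request data format: %r' % type(data))
```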

Tracking chained redirection

Sometimes when you make a GET request, the client follows multiple redirections. It would be cool to keep track of the URLs followed and the corresponding status codes in a variable like:

>>> import requests
>>> r = requests.get('http://example.org/first_redir.php')
>>> r.chained_dir
[('/first_redir.php', 302), ('/second_redir.php', 301)]
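
The bookkeeping is just a loop that records a (url, status) pair before each hop. A sketch with the HTTP layer stubbed out as a callable (all names hypothetical):

```python
def follow_redirects(url, fetch, max_hops=30):
    # `fetch` stands in for the HTTP layer: it returns a status code
    # and either a Location value or None. Every hop taken is
    # accumulated in the chain the reporter asks for.
    chain = []
    for _ in range(max_hops):
        status, location = fetch(url)
        if location is None:
            break
        chain.append((url, status))
        url = location
    return chain, url
```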

GET request to PyPI does not work

Hi, requests is fantastic!
But if I try to retrieve data from http://pypi.python.org/simple/sphinx/ I get the following error:

ValueError: unknown url type: /simple/Sphinx

That is because PyPI redirects me to that url. You should really allow redirections.

rubik

Auth fail for private github repo

I've got a private repo here on github. I want to suck down all the issues. This works in curl (password removed):

$ curl -u "mw44118:XXXXXXXX" https://api.github.com/repos/spothero/SpotHero-Splash-Page/issues
{ 
... real data here
}

And this is what happens when I don't include the -u authentication stuff:

$ curl https://api.github.com/repos/spothero/SpotHero-Splash-Page/issues
{ 
  "message": "Not Found"
}

Then I tried the same thing with requests (again XXXXXX ain't really my password):

>>> requests.get('https://api.github.com/repos/spothero/SpotHero-Splash-Page/issues', auth=('mw44118', 'XXXXXXXX'))
<Response [404]>

I wondered if maybe you don't actually send the auth stuff unless the server replies with a 401, and since GitHub is replying with a 404, the requests code never knows to send the credentials.

I tried reading through the requests code, but didn't see anything that supported that guess.

POST + file + auth

Hi,

I'm trying to send a POST request with a file to a URL which requires authentication.

Following the docs I do:

requests.post(url, files={"file" : open("filename")}, auth=my_auth_object)

Unfortunately I'm getting this error:

  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 383, in open
    response = self._open(req, data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 401, in _open
    '_open', req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1130, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1102, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 874, in request
    self._send_request(method, url, body, headers)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 914, in _send_request
    self.send(body)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 719, in send
    self.sock.sendall(str)
  File "/opt/virtualenvs/brainaetic/lib/python2.6/site-packages/eventlet/greenio.py", line 287, in sendall
    tail = self.send(data, flags)
  File "/opt/virtualenvs/brainaetic/lib/python2.6/site-packages/eventlet/greenio.py", line 265, in send
    len_data = len(data)
AttributeError: multipart_yielder instance has no attribute 'len'

Ideas?

P.S.: If I remove the auth, the request doesn't fail, but obviously the auth is required in my case.

Headers kwarg overrides Poster headers

If a caller uses both the files and headers kwargs in requests.request, the dictionary provided in headers will override the multipart encoding headers generated by Poster. See line 180 in core.py. Also, I am using v0.3.0

The request will fail with the following error:

ValueError: No Content-Length specified for iterable body

Example:

import requests

files = { "hello.txt" : "/path/to/file/hello.txt" }

# this works
requests.post('http://path/to/server', files=files)

# this does not work
requests.post('http://path/to/server', headers={ 'Accept' : 'application/json' }, files=files)
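
The fix is a merge order where caller-supplied headers are laid over the generated ones, except for the framing headers the multipart encoder must control. A sketch (merge_headers and the protected set are hypothetical, not requests code):

```python
def merge_headers(generated, user):
    # Start from the multipart headers the encoder generated and let
    # the caller add or override everything except the framing ones.
    protected = {'content-type', 'content-length'}
    merged = dict(generated)
    for key, value in user.items():
        if key.lower() not in protected:
            merged[key] = value
    return merged
```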

Requests does not support Internationalized domain name (idna)

The following url http://➡.ws/gggo should redirect to google.com but it doesn't work with requests.

See the following code:

>>> import requests
>>> a = requests.get(u'http://➡.ws/gggo')
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/Users/jerem/dev/resolver/lib/python2.7/site-packages/requests/api.py", line 75, in get
    timeout=timeout, proxies=proxies)
  File "/Users/jerem/dev/resolver/lib/python2.7/site-packages/requests/api.py", line 53, in request
    r.send()
  File "/Users/jerem/dev/resolver/lib/python2.7/site-packages/requests/models.py", line 294, in send
    resp = opener(req)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/urllib2.py", line 392, in open
    response = self._open(req, data)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/urllib2.py", line 410, in _open
    '_open', req)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/urllib2.py", line 370, in _call_chain
    result = func(*args)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/urllib2.py", line 1186, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/urllib2.py", line 1155, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 941, in request
    self._send_request(method, url, body, headers)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 974, in _send_request
    self.putheader(hdr, value)
  File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 921, in putheader
    hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values]))
UnicodeEncodeError: 'ascii' codec can't encode character u'\u27a1' in position 0: ordinal not in range(128)
>>> a = requests.get('http://➡.ws/gggo')
>>> a.content
'<html>\n<head>\n\t<title>.WS Internationalized Domain Names</title>\n</head>\n<frameset rows="100%,*" border="0" frameborder="0">\n\t<frame src="http://zoomdns.net" scrol
ling="auto">\n\t<noframes>\n\t\t<p> Your browser does not support frames. Continue to <a href="http://zoomdns.net">http://zoomdns.net</a>.</p>\n\t</noframes>\n</frameset>\
n</html>'
>>> a.url
'http://\xe2\x9e\xa1.ws/gggo'

As you can see, it doesn't redirect to google.com but to a compatibility page for clients that don't support IDNA.
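
The fix is to IDNA-encode the hostname before it reaches httplib, and the stdlib already ships the codec (IDNA 2003, which may not accept every symbol domain). A sketch (ascii_host is a hypothetical helper, modern urllib names shown):

```python
from urllib.parse import urlsplit

def ascii_host(url):
    # Convert an internationalized hostname to its ASCII (punycode)
    # form, which is what the HTTP layer can actually send.
    return urlsplit(url).hostname.encode('idna').decode('ascii')
```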

Cookie support?

A feature request (I found nothing about this in the documentation).

Does this library support cookies?

Use case: I want to integrate this module inside an existing framework. The framework generates the authentication/session cookie for me, so to perform requests with requests I need to send along the same auth cookie that was already generated.

v0.5 is backwards incompatible

You may already know and be totally cool with this, but requests v0.5 is backwards incompatible due to this commit.

>>> import requests
>>> requests.__version__
'0.4.0'
>>> # Sends an HTTP GET request to http://example.org with data in the query string
>>> requests.request('GET', 'http://example.org', data={'foo': 'bar'})

>>> import requests
>>> requests.__version__
'0.5.0'
>>> # Sends an HTTP GET request to http://example.org with data in the request body
>>> requests.request('GET', 'http://example.org', data={'foo': 'bar'})

Remove AuthManager

Worst part of the codebase.

New context manager could handle auth.

AuthObjects can have their own optional URL includes. They can also have a 'force' mode that would force the use of authentication, for servers that return 404 instead of 401 for security purposes (GitHub).

allow_redirects doesn't work

I haven't had the opportunity to research this yet, but I thought I'd go ahead and submit it because it might be a while until I can.

>>> import requests
>>> response = requests.get('http://graph.facebook.com/espen.hogbakk/picture', allow_redirects=False)
>>> response.url
'http://profile.ak.fbcdn.net/hprofile-ak-snc4/23219_653735294_3452_q.jpg'

Automatically Handle Server-side Compression

Reproducible example:

import zlib
import requests

r = requests.get('http://api.stackoverflow.com/1.1/users/495995/top-answer-tags')

zlib.decompress(r.content, 16+zlib.MAX_WBITS)

This should already be decompressed because of the response header:

>>> r.headers['content-encoding']
'gzip'
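
What "automatic handling" would look like: check Content-Encoding and gunzip before handing the body to the caller. A sketch (maybe_decompress is a hypothetical helper):

```python
import zlib

def maybe_decompress(body, content_encoding):
    # 16 + MAX_WBITS tells zlib to expect a gzip wrapper, matching
    # the manual workaround in the reproducible example above.
    if content_encoding == 'gzip':
        return zlib.decompress(body, 16 + zlib.MAX_WBITS)
    return body
```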

Remove Poster

Great project, but an unnecessary dependency. It is also the main barrier to 3.x compatibility.

Allow querystring parameters with POST, PUT

It would be nice to allow querystring parameters to be supplied for a POST or PUT request.

currently, I have to do something like:

requests.post("http://localhost/test?" + urllib.urlencode(dict(foo=bar)), data="body")

I'd like, instead, to be able to do

requests.post("http://localhost/test", params=dict(foo=bar), data="body")
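
Until then, the workaround can at least be packaged up. A sketch (with_params is a hypothetical helper, written with the modern urllib.parse names):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_params(url, params):
    # Merge an encoded query string into the URL so the request body
    # stays free for the actual POST/PUT payload.
    scheme, netloc, path, query, fragment = urlsplit(url)
    extra = urlencode(params)
    query = query + '&' + extra if query else extra
    return urlunsplit((scheme, netloc, path, query, fragment))
```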

Timeouts throw exception from urllib2

A slight grump: when I set a timeout (say requests.get('some url', timeout=1.0)) and the server on the other end doesn't reply in time, a urllib2.URLError is thrown. Yay that it timed out right, but in order to catch the exception I have to import urllib2 as well. Any chance a requests exception could be thrown there instead?
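
A thin wrapper is all it would take. A sketch (Timeout and checked are hypothetical names; modern socket/urllib names shown):

```python
import socket
import urllib.error

class Timeout(Exception):
    """A library-level exception, so callers need not import urllib."""

def checked(call):
    # Translate the stdlib's timeout-related errors into the
    # library's own exception type.
    try:
        return call()
    except (socket.timeout, urllib.error.URLError) as exc:
        raise Timeout(str(exc))
```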

A "40x" error may have content

Sometimes a "40x" status does not mean there is no content in the response. In fact, this is the second time I'm working on an HTTP client that has to handle content even when the HTTP return code is something other than "200". An example: a web service that throws a "403" (forbidden) error but still outputs some content, e.g. an XML structure giving the reason for the error: incorrect password, unauthorized operation, etc. This result is meaningful, and it helps to debug code.

Usually I have to create a workaround using urllib rather than urllib2 to capture the response body, even when a "40x" error is thrown.

I thought requests would handle this, but when I tried it against my small test web service, here's the result:

>>> resp = requests.post('http://my-server/webservices/stuff', {'user': 'yada', 'pass': 'yada'})
>>> print resp
<Response [403]>  # this is correct, because the user/pass is wrong
>>> print resp.content
None

But in my actual response body, I've got XML (and I've checked).
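
At the urllib2 level the body is recoverable, because HTTPError is itself a file-like response object. Demonstrated here with a hand-built error so no network is needed (modern urllib names shown; urllib2.HTTPError behaves the same way):

```python
import io
import urllib.error

# An HTTPError carries the status code AND a readable body, so a
# client library could expose the 403's XML as r.content.
err = urllib.error.HTTPError(
    'http://my-server/webservices/stuff', 403, 'Forbidden',
    {}, io.BytesIO(b'<error>incorrect password</error>'))

body = err.read()  # the payload the server sent alongside the 403
```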
