
A class that uses scraped proxies to make HTTP GET/POST requests (Python requests).

License: MIT License

Topics: python, requests-module, requests, proxy, proxy-server, proxy-list, webscraping, webscraper, webscraper-api, recursion-problem

proxy_requests's Introduction

Python Proxy Requests | make an HTTP GET/POST with a proxy scraped from https://www.sslproxies.org/


pypi.org: https://pypi.org/project/proxy-requests/

The ProxyRequests class first scrapes proxies from the web, then recursively retries the request with another proxy if the initial proxied request is unsuccessful.
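As a rough illustration of that retry idea (a minimal sketch only; the helper name and structure are hypothetical, not the library's actual internals):

import requests

def fetch_with_proxy(url, sockets):
    # Hypothetical sketch: try one scraped proxy and, if the attempt
    # fails, recurse with the remaining pool.
    if not sockets:
        raise RuntimeError('proxy pool empty')
    socket = sockets.pop()  # an 'ip:port' string scraped earlier
    proxies = {'http': 'http://' + socket, 'https': 'https://' + socket}
    try:
        return requests.get(url, proxies=proxies, timeout=3.0)
    except requests.exceptions.RequestException:
        return fetch_with_proxy(url, sockets)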

Either copy the code and put it where you want it, or install via pip:

pip install proxy-requests (or pip3)

from proxy_requests import ProxyRequests

or if you need the Basic Auth subclass as well:
from proxy_requests import ProxyRequests, ProxyRequestsBasicAuth

If the above import statement is used, method calls will be identical to the ones shown below. Pass a fully qualified URL when initializing an instance.

System Requirements: Python 3 and the requests module.

Runs on Linux and Windows (and probably macOS). It may take a moment to run depending on the current proxy.
Each request with a proxy is set with a 3-second timeout in case the request takes too long (before trying the next proxy socket in the queue).

Proxies are randomly popped from the queue.
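Concretely, a single attempt looks roughly like this in plain requests (illustrative socket values; the variable names are not the library's own):

import random
import requests

sockets = ['203.0.113.7:8080', '198.51.100.4:3128']   # example 'ip:port' strings
socket = sockets.pop(random.randrange(len(sockets)))  # random pop from the queue
proxies = {'http': 'http://' + socket, 'https': 'https://' + socket}
r = requests.get('https://api.ipify.org', proxies=proxies, timeout=3.0)  # 3-second cap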

The ProxyRequestsBasicAuth subclass has the methods get(), get_with_headers(), post(), post_with_headers(), post_file(), and post_file_with_headers(), which override the parent methods.

GET:

r = ProxyRequests('https://api.ipify.org')
r.get()

GET with headers:

h = {'User-Agent': 'NCSA Mosaic/3.0 (Windows 95)'}
r = ProxyRequests('url here')
r.set_headers(h)
r.get_with_headers()

POST:

r = ProxyRequests('url here')
r.post({'key1': 'value1', 'key2': 'value2'})

POST with headers:

r = ProxyRequests('url here')
r.set_headers({'name': 'rootVIII', 'secret_message': '7Yufs9KIfj33d'})
r.post_with_headers({'key1': 'value1', 'key2': 'value2'})

POST FILE:

r = ProxyRequests('url here')
r.set_file('test.txt')
r.post_file()

POST FILE with headers:

h = {'User-Agent': 'NCSA Mosaic/3.0 (Windows 95)'}
r = ProxyRequests('url here')
r.set_headers(h)
r.set_file('test.txt')
r.post_file_with_headers()

GET with Basic Authentication:

r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.get()

GET with headers & Basic Authentication:

h = {'User-Agent': 'NCSA Mosaic/3.0 (Windows 95)'}
r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.set_headers(h)
r.get_with_headers()

POST with Basic Authentication:

r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.post({'key1': 'value1', 'key2': 'value2'})

POST with headers & Basic Authentication:

r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.set_headers({'header_key': 'header_value'})
r.post_with_headers({'key1': 'value1', 'key2': 'value2'})

POST FILE with Basic Authentication:

r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.set_file('test.txt')
r.post_file()

POST FILE with headers & Basic Authentication:

h = {'User-Agent': 'NCSA Mosaic/3.0 (Windows 95)'}
r = ProxyRequestsBasicAuth('url here', 'username', 'password')
r.set_headers(h)
r.set_file('test.txt')
r.post_file_with_headers()



Response Methods

Get the response as a string:
print(r)

Get the raw content as bytes:
r.get_raw()

Get the response as JSON (if valid JSON):
r.get_json()

Get the response headers:
print(r.get_headers())

Get the status code:
print(r.get_status_code())

Get the URL that was requested:
print(r.get_url())

Get the proxy that was used to make the request:
print(r.get_proxy_used())
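Put together, a typical round trip looks like this (https://api.ipify.org simply echoes the caller's IP, so it is a convenient test target):

from proxy_requests import ProxyRequests

r = ProxyRequests('https://api.ipify.org')
r.get()
print(r)                    # response body as a string
print(r.get_status_code()) # e.g. 200
print(r.get_proxy_used())  # the 'ip:port' that carried the request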

To write raw data to a file (including an image):

url = 'https://www.restwords.com/static/ICON.png'
r = ProxyRequests(url)
r.get()
with open('out.png', 'wb') as f:
    f.write(r.get_raw())

Dump the response to a file as JSON:

import json

with open('test.txt', 'w') as f:
    json.dump(r.get_json(), f)


This was developed on Ubuntu 16.04.4/18.04 LTS.
Author: rootVIII 2018-2020


proxy_requests's Issues

request get

Pass extra data to a GET request, for example:

data = {'key': 1, 'key2': 2}
r = ProxyRequests(url)
r.set_headers(headers)
r.get_with_headers()
r.get(data)
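For comparison, plain requests exposes this through the params keyword, which appears to be what this request is asking for (sketch with a placeholder URL):

import requests

# In plain requests, extra query data is passed via params:
data = {'key': 1, 'key2': 2}
r = requests.get('https://httpbin.org/get', params=data)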

[Feature request]: get_with_headers

It is crucial to have User-Agent headers for some sites to work. I spent an hour debugging and trying to find what was wrong, and it turned out that there is no get method with headers support:

    def get_with_headers(self):
        if len(self.sockets) > 0:
            current_socket = self.sockets.pop(0)
            proxies = {"http": "http://" + current_socket, "https": "https://" + current_socket}
            try:
                request = requests.get(self.url, timeout=3.0, proxies=proxies, headers=self.headers)
                self.request = request.text
                self.headers = request.headers
                self.status_code = request.status_code
                self.proxy_used = current_socket
            except:
                print('working...')
                self.get_with_headers()

Proxy is always the same

I've noticed that if I call ProxyRequests 10 times, I always get the same first proxy from the list. Is it possible to get a random proxy from the list?

response url

Hi,
I am missing the possibility to get the response URL, like response.request.url in Python's Requests library. I can't find this info in the headers or in html.text.

Checking for Cloudflare Captcha

Thanks for providing this great script! I was about to make something similar before finding this.

There were two things I noticed when using the code: ReadTimeout was not in the list of errors, and it could be a good idea to check whether the returned data is a Cloudflare captcha page.

I've added this bit of code on my side and it seems to do the job well (added to the set_request_data function):

if 'why_captcha_headline' in req.text:
    raise CloudFlareCaptcha

Then I created a CloudFlareCaptcha exception class and added it to the list of errors.

When this error is hit, it triggers the exception handling in each of the get/post functions, which then retry.
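A minimal sketch of those pieces (the marker string and class name are the reporter's; the helper wrapper is illustrative, not the actual patch):

class CloudFlareCaptcha(Exception):
    """Raised when a proxied response is actually a Cloudflare captcha page."""

def check_for_captcha(req):
    # req is a requests.Response; 'why_captcha_headline' appears in
    # Cloudflare's captcha interstitial page.
    if 'why_captcha_headline' in req.text:
        raise CloudFlareCaptcha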

proxies are repeating

Is there a way to use every proxy once? My code is like:

from proxy_requests import ProxyRequests

while True:
    r = ProxyRequests("Censored")
    r.post({"Censored"})
    print(r)
    print(r.get_status_code())
    print(r.get_proxy_used())

and the output is like this:

91.92.80.25:40487
"CENSORED"
200

117.206.83.26:41960
"CENSORED"
400

117.206.83.26:41960
"CENSORED"
400

117.206.83.26:41960
"CENSORED"

Not Working Python 3.9

Not working on the recent Python version 3.9. It makes the request, then fails with the error "Proxy Pool Empty"; no proxy is used, and no source or JSON is returned.

Reduced size of sockets list

Rather a question than an issue: I have read that you have reduced the size of the sockets list parsed from sslproxies. I'm curious why, as I first thought that the regex was faulty.

HTTP proxies in the list

There are proxies in the list which are HTTP-only, and I am assuming they get skipped.

It would be better to either filter out the HTTP-only proxies or add support for them, so that they are not silently skipped.

not working

I am trying to scrape a site which is blocked by my ISP.

My code:

from proxy_requests import ProxyRequests

r = ProxyRequests(url)
r.get()

Output:

Unable to make proxied request... Please check the validity of https://httpbin.org/ip

KeyError: 'request'

Python 3.9.1 (default, Dec 13 2020, 11:55:53)

from proxy_requests import ProxyRequests
url = "https://api.ipify.org/"
r = ProxyRequests(url)
print(r)
--------------------------------------------------
KeyError         Traceback (most recent call last)
<ipython-input-9-c6b8a6a9416b> in <module>
----> 1 print(r)

~/.local/lib/python3.9/site-packages/proxy_requests/proxy_requests.py in __str__(self)
    202 
    203     def __str__(self):
--> 204         return str(self.rdata['request'])
    205 
    206 

KeyError: 'request'
