
ipproxypool's Introduction

IPProxyPool

IPProxyPool is a proxy-pool project that collects and serves proxy IPs. Both Python 2 and Python 3 are supported.

My new book, 《Python爬虫开发与项目实战》 (Python Crawler Development and Project Practice), has been published; sample chapters are available if you are interested.


For detailed usage, see my blog post: http://www.cnblogs.com/qiyeboy/p/5693128.html
I am currently adding second-level proxy support to IPProxyPool to make scheduling easier. Follow my WeChat official account and I will announce updates there.

My WeChat official account:


Please suggest more proxy sites to crawl; the number of usable proxy IPs currently collected is still too small.
Thanks to super1-chen, fancoo, and Leibnizhu for their contributions to the project.

Dependencies

Ubuntu / Debian

1. Install the sqlite database (usually bundled with the system): apt-get install sqlite3
2. Install requests, chardet, web.py, sqlalchemy, gevent, and psutil: pip install requests chardet web.py sqlalchemy gevent psutil
3. Install lxml: apt-get install python-lxml
Notes:

  • Under Python 3, use pip3 instead of pip
  • An outdated gevent version can make the program exit unexpectedly; upgrade it with pip install gevent --upgrade
  • Under Python 3, web.py cannot be installed with pip; download the py3 version of the source and install from it

Windows

1. Download sqlite and add its path to the environment variables
2. Install requests, chardet, web.py, sqlalchemy, and gevent: pip install requests chardet web.py sqlalchemy gevent
3. Install lxml: pip install lxml, or download the lxml Windows build
Notes:

  • Under Python 3, use pip3 instead of pip
  • An outdated gevent version can make the program exit unexpectedly; upgrade it with pip install gevent --upgrade
  • Under Python 3, web.py cannot be installed with pip; download the py3 version of the source and install from it

Extending the Project

The project's default database is sqlite, but it uses sqlalchemy's ORM, so through the reserved interface it can be extended to MySQL, MongoDB, and other databases. Configuration:
1. MySQL configuration

Step 1: Install the MySQL database and start it
Step 2: Install MySQLdb or pymysql (recommended)
Step 3: Configure DB_CONFIG in config.py. If the MySQLdb module is installed, configure it as follows:
        DB_CONFIG = {
            'DB_CONNECT_TYPE': 'sqlalchemy',
            'DB_CONNECT_STRING': 'mysql+mysqldb://root:root@localhost/proxy?charset=utf8'
        }
        If the pymysql module is installed, configure it as follows:
        DB_CONFIG = {
            'DB_CONNECT_TYPE': 'sqlalchemy',
            'DB_CONNECT_STRING': 'mysql+pymysql://root:root@localhost/proxy?charset=utf8'
        }

For the format of DB_CONNECT_STRING under sqlalchemy, see its list of supported databases. In theory this configuration is not limited to MySQL; any database sqlalchemy supports should work, but only MySQL has been tested.
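
A quick way to verify the connection string before starting the pool (a sketch, not project code; it assumes the proxy database already exists and pymysql is installed):

from sqlalchemy import create_engine, text

# Minimal connectivity check for the MySQL connection string above;
# adjust user/password/host to your own setup.
engine = create_engine('mysql+pymysql://root:root@localhost/proxy?charset=utf8')
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())  # prints 1 if the database is reachable
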
2. MongoDB configuration

Step 1: Install the MongoDB database and start it
Step 2: Install the pymongo module
Step 3: Configure DB_CONFIG in config.py, similar to the following:
        DB_CONFIG={
            'DB_CONNECT_TYPE':'pymongo',
            'DB_CONNECT_STRING':'mongodb://localhost:27017/'
        }

Since sqlalchemy does not support MongoDB, a separate pymongo mode was added; for DB_CONNECT_STRING, refer to pymongo's connection-string format.
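
A similar quick check for the MongoDB side (a sketch, not project code; it assumes a local mongod is running on the default port):

from pymongo import MongoClient

# Minimal connectivity check for the pymongo mode above.
client = MongoClient('mongodb://localhost:27017/')
print(client.server_info()['version'])  # raises ServerSelectionTimeoutError if mongod is unreachable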

Note

To support another database, inherit from the ISqlHelper class under db, implement its methods (see my code for reference), and then import your class in DataStore:

try:
    if DB_CONFIG['DB_CONNECT_TYPE'] == 'pymongo':
        from db.MongoHelper import MongoHelper as SqlHelper
    else:
        from db.SqlHelper import SqlHelper as SqlHelper
    sqlhelper = SqlHelper()
    sqlhelper.init_db()
except Exception as e:  # note: 'except Exception, e' is Python 2-only syntax
    raise Con_DB_Fail

If anyone is interested, a Redis implementation would be a welcome addition.
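
As a starting point, a minimal skeleton might look like the sketch below. The method names (init_db, insert, delete, select) are assumptions based on the description above; check the ISqlHelper class under db for the actual interface.

import redis

class RedisHelper(object):
    # Hypothetical Redis-backed helper; a sketch only, not the project's API.
    def init_db(self, url='redis://localhost:6379/8'):
        self.client = redis.StrictRedis.from_url(url)

    def insert(self, value):
        # store each proxy as a hash keyed by "ip:port"
        key = '%s:%s' % (value['ip'], value['port'])
        self.client.hmset(key, value)

    def delete(self, conditions):
        pattern = '%s:%s' % (conditions.get('ip', '*'), conditions.get('port', '*'))
        for key in self.client.keys(pattern):
            self.client.delete(key)

    def select(self, count=None, conditions=None):
        keys = self.client.keys('*')[:count]  # [:None] returns all keys
        return [self.client.hgetall(key) for key in keys]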

How to Use

Clone the project into the current directory

$ git clone

Switch into the project directory

$ cd IPProxyPool

Run the script

python IPProxy.py

After a successful start, the following is printed:

IPProxyPool----->>>>>>>>beginning
http://0.0.0.0:8000/
IPProxyPool----->>>>>>>>db exists ip:0
IPProxyPool----->>>>>>>>now ip num < MINNUM,start crawling...
IPProxyPool----->>>>>>>>Success ip num :134,Fail ip num:7882

API Usage

Mode 1

GET /

This mode queries the proxy IP data. A scoring mechanism is applied: results are returned ordered by score from high to low, then by speed from fast to slow.

Parameters

Name      Type  Description
types     int   0: elite (high anonymity), 1: anonymous, 2: transparent
protocol  int   0: http, 1: https, 2: http/https
count     int   number of proxies to return
country   str   国内 (domestic) or 国外 (foreign)
area      str   region

Example

IPProxys listens on port 8000 by default; the port can be configured in config.py.
When testing on the local machine:

1. Fetch 5 elite (high-anonymity) proxies in China: http://127.0.0.1:8000/?types=0&count=5&country=国内
2. The response is JSON, ordered by score from high to low and then by response speed from fast to slow:

[["122.226.189.55", 138, 10], ["183.61.236.54", 3128, 10], ["61.132.241.109", 808, 10], ["183.61.236.53", 3128, 10], ["122.227.246.102", 808, 10]]

以["122.226.189.55", 138, 10]为例,第一个元素是ip,第二个元素是port,第三个元素是分值score。
import requests
import json

r = requests.get('http://127.0.0.1:8000/?types=0&count=5&country=国内')
ip_ports = json.loads(r.text)
print(ip_ports)  # print() keeps the snippet runnable on both py2 and py3
ip = ip_ports[0][0]
port = ip_ports[0][1]
proxies = {
    'http': 'http://%s:%s' % (ip, port),
    'https': 'http://%s:%s' % (ip, port)
}
r = requests.get('http://ip.chinaz.com/', proxies=proxies)
r.encoding = 'utf-8'
print(r.text)

Mode 2

GET /delete

This mode lets users delete proxy IP records matching their own criteria.

Parameters

Name      Type  Description
ip        str   e.g. 192.168.1.1
port      int   e.g. 80
types     int   0: elite (high anonymity), 1: anonymous, 2: transparent
protocol  int   0: http, 1: https, 2: http/https
count     int   number of proxies
country   str   国内 (domestic) or 国外 (foreign)
area      str   region

You can delete records by any one of the above fields or by several of them combined (a combined-parameters sketch follows the example below).

Example

When testing on the local machine:

1. Delete the proxy with IP 120.92.3.127: http://127.0.0.1:8000/delete?ip=120.92.3.127
2. The response is JSON and reports either success/failure or the number of deleted records, e.g.: ["deleteNum", "ok"] or ["deleteNum", 1]

import requests
r = requests.get('http://127.0.0.1:8000/delete?ip=120.92.3.127')
print(r.text)  # print() form works on both py2 and py3
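
Because the parameters can be combined (see the table above), a narrower deletion is also possible. A sketch, assuming the same local server; whether a given combination matches anything depends on the data in your pool:

import requests
# delete every transparent http proxy (types=2, protocol=0), per the parameter table above
r = requests.get('http://127.0.0.1:8000/delete?types=2&protocol=0')
print(r.text)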

config.py Parameters

#parserList is the table of site-parsing rules. When you find a new proxy site, add its URL and extraction rules here so the spider can crawl it.
parserList = [
    {
        'urls': ['http://www.66ip.cn/%s.html' % n for n in ['index'] + list(range(2, 12))],
        'type': 'xpath',
        'pattern': ".//*[@id='main']/div/div[1]/table/tr[position()>1]",
        'position': {'ip': './td[1]', 'port': './td[2]', 'type': './td[4]', 'protocol': ''}
    },
    
   ......
 
   
    {
        'urls': ['http://www.cnproxy.com/proxy%s.html' % i for i in range(1, 11)],
        'type': 'module',
        'moduleName': 'CnproxyPraser',
        'pattern': r'<tr><td>(\d+\.\d+\.\d+\.\d+)<SCRIPT type=text/javascript>document.write\(\"\:\"(.+)\)</SCRIPT></td><td>(HTTP|SOCKS4)\s*',
        'position': {'ip': 0, 'port': 1, 'type': -1, 'protocol': 2}
    }
]

#Database configuration

DB_CONFIG = {

    'DB_CONNECT_TYPE': 'sqlalchemy',  # 'sqlalchemy', 'pymongo' or 'redis'
    # 'DB_CONNECT_STRING': 'mongodb://localhost:27017/'
    'DB_CONNECT_STRING': 'sqlite:///' + os.path.dirname(__file__) + '/data/proxy.db'
    # 'DB_CONNECT_STRING': 'mysql+mysqldb://root:root@localhost/proxy?charset=utf8'

    # 'DB_CONNECT_TYPE': 'redis',  # 'sqlalchemy', 'pymongo' or 'redis'
    # 'DB_CONNECT_STRING': 'redis://localhost:6379/8',

}
#THREADNUM is the number of coroutines in the gevent pool
THREADNUM = 5

#API_PORT is the port of the API web server
API_PORT = 8000

#Settings for crawling and checking proxy IPs
#There is no need to check whether an IP already exists, because entries are cleaned up periodically
# UPDATE_TIME: check every half hour whether the proxy IPs are still valid
UPDATE_TIME = 30 * 60

# When the number of valid IPs falls below MINNUM, the spider starts crawling
MINNUM = 50

# Socket timeout
TIMEOUT = 5




#Number of retries when the spider downloads a page
RETRY_TIME = 3


#USER_AGENTS is a pool of request headers picked at random to get around sites' anti-crawling measures (a usage sketch follows the list below)

USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
   ]
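
The list is meant to be sampled at random for each request. A minimal sketch of the usual pattern (illustrative only; see the spider code for how the project actually uses it):

import random

# pick a fresh random User-Agent header for each page download
headers = {'User-Agent': random.choice(USER_AGENTS)}
# e.g. requests.get(url, headers=headers, timeout=TIMEOUT)
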
#Each newly crawled IP is assigned DEFAULT_SCORE points; every failed check deducts one point, and when the score reaches zero the IP is deleted from the database
DEFAULT_SCORE=10

#The CHECK_PROXY variable lets users define their own proxy-check function; the default is CHECK_PROXY={'function':'checkProxy'}.
#The check URL currently used is httpbin.org, but even if an IP passes validation and checking,
#that only proves the proxy can reach httpbin.org; it does not guarantee it can reach the site you want to crawl.
#You can therefore add your own check function here; as an example I tried Baidu as the target URL.
#See the baidu_check and detect_proxy functions in Validator.py and it will be clear.

CHECK_PROXY={'function':'checkProxy'}#{'function':'baidu_check'}
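
For illustration, a user-defined check might look like the sketch below. The (selfip, proxies) signature is an assumption based on the description above; consult baidu_check and detect_proxy in Validator.py for the authoritative interface.

import requests

def my_check(selfip, proxies):
    # Hypothetical custom check: succeed only if the proxy can reach the site
    # you actually intend to crawl, rather than httpbin.org.
    try:
        r = requests.get('https://www.baidu.com', proxies=proxies, timeout=TIMEOUT)
        return r.ok
    except requests.RequestException:
        return False

# then point config.py at it: CHECK_PROXY = {'function': 'my_check'}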

TODO

1. Add a squid proxy layer to simplify spider configuration

Changelog

-----------------------------2017-4-6----------------------------
1. Updated the scoring mechanism.

  • Previously, each newly added proxy IP started at 0 points and was checked every half hour; if still valid it gained points, otherwise it was deleted.
  • Now each new proxy IP is assigned 10 points and checked every half hour; if still valid the score is unchanged, otherwise it is decremented by one until it reaches 0 and the IP is deleted. This avoids mistaken deletions caused by an unstable check site.

2. Users can define their own check function, configured via the CHECK_PROXY variable in config.py.

The CHECK_PROXY variable lets users define their own proxy-check function; the default is CHECK_PROXY={'function':'checkProxy'}.
The check URL currently used is httpbin.org, but even if an IP passes validation and checking,
that only proves the proxy can reach httpbin.org; it does not guarantee it can reach the site you want to crawl.
You can therefore add your own check function; as an example I tried Baidu as the target URL.
See the baidu_check and detect_proxy functions in Validator.py and it will be clear.

CHECK_PROXY={'function':'baidu_check'}

3. Through everyone's joint effort, the zombie-process problem has been completely resolved.

-----------------------------2017-1-16----------------------------
1. Merged the py2 and py3 versions into a single compatible codebase
2. Fixed a pymongo query bug
-----------------------------2017-1-11----------------------------
1. Use httpbin.org to detect the anonymity level of proxy IPs
2. Use 国内 (domestic) and 国外 (foreign) as the country query values
3. Changed the types and protocol parameters; pay close attention to protocol, and try accessing both http://www.baidu.com and https://www.baidu.com
4. Cleaned up the code style
-----------------------------2016-12-11----------------------------
Large-scale refactoring, mainly covering the following:
1. Use multiple processes plus coroutines, improving crawl and validation efficiency more than 50-fold; all valid IPs can now be fetched within a few minutes
2. Use web.py as the API server and refactor the HTTP interface
3. Add adapters for MySQL, MongoDB, and other databases
4. Add three more proxy sites
5. Add a scoring mechanism to rank stable IPs
6. Support Python 3
-----------------------------2016-11-24----------------------------
1. Use chardet to detect page encodings
2. Work around 66ip.cn's anti-crawling limits
-----------------------------2016-10-27----------------------------
1. Add proxy checking that tests whether the target URL can really be reached through the proxy
2. Add page parsing via regular expressions and loadable plugins
3. Add another new proxy site

-----------------------------2016-7-20----------------------------
1. Bug fixes; compacted the database

ipproxypool's People

Contributors

haowg, jarvis4901, leibnizhu, light4, petelin, qiyeboy, sqian3, super1-chen, xsren


ipproxypool's Issues

Nothing was changed; I may have misconfigured something. The error is as follows

ubuntu@ubuntu:/IPProxyPool$ sudo python IPProxy.py
Traceback (most recent call last):
File "IPProxy.py", line 4, in
from api.apiServer import start_api_server
File "/IPProxyPool/api/apiServer.py", line 9, in
from db.DataStore import sqlhelper
File "/IPProxyPool/db/DataStore.py", line 15, in
raise Con_DB_Fail
util.exception.Con_DB_Fail: <exception str() failed>

web.py was installed from source.
I am not sure what went wrong; the environment is Python 3.5. Did I miss something? Email: [email protected]
Hoping to hear from you, thanks in advance.

Every other page refresh triggers an error; details below

Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/web/application.py", line 239, in process
return self.handle()
File "/Library/Python/2.7/site-packages/web/application.py", line 230, in handle
return self._delegate(fn, self.fvars, args)
File "/Library/Python/2.7/site-packages/web/application.py", line 462, in _delegate
return handle_class(cls)
File "/Library/Python/2.7/site-packages/web/application.py", line 438, in handle_class
return tocall(*args)
File "/Users/cdsb/Codes/IPProxyPool/api/apiServer.py", line 27, in GET
json_result = json.dumps(sqlhelper.select(inputs.get('count', None), inputs))
File "/Users/cdsb/Codes/IPProxyPool/db/MongoHelper.py", line 60, in select
result = (item['ip'], item['port'], item['score'])
File "/Library/Python/2.7/site-packages/pymongo/cursor.py", line 1114, in next
if len(self.__data) or self._refresh():
File "/Library/Python/2.7/site-packages/pymongo/cursor.py", line 1036, in _refresh
self.__collation))
File "/Library/Python/2.7/site-packages/pymongo/cursor.py", line 873, in __send_message
**kwargs)
File "/Library/Python/2.7/site-packages/pymongo/mongo_client.py", line 905, in _send_message_with_response
exhaust)
File "/Library/Python/2.7/site-packages/pymongo/mongo_client.py", line 916, in _reset_on_error
return func(*args, **kwargs)
File "/Library/Python/2.7/site-packages/pymongo/server.py", line 136, in send_message_with_response
response_data = sock_info.receive_message(1, request_id)
File "/Library/Python/2.7/site-packages/pymongo/pool.py", line 452, in receive_message
self._raise_connection_failure(error)
File "/Library/Python/2.7/site-packages/pymongo/pool.py", line 552, in _raise_connection_failure
raise error
LoopExit: ('This operation would block forever', <Hub at 0x10b661730 select pending=0 ref=0>)

The validator feels too tightly coupled to the IP-lookup test site

For example, I want to crawl the Douban API. A pragmatic validation strategy is to make sure that requesting a fixed API URL always returns 200 with a reasonable body length (since the test URL is fixed, the correct length is predictable). So I changed
if not r.ok or r.text.find(ip)==-1:
to
if r.status_code != 200 and len(r.text) > 10000:
I expected that to be enough, but at runtime it raised exceptions. Tracing it, I found the validation component is tightly coupled to a site that returns the IP in a fixed format; changing it was messy, so I simply made the getMyIp function return a constant to stop the errors.

From the code, the local address is fetched to determine a proxy's anonymity level. Since that feature is currently unused, it might be worth disabling the related code. Also, proxy validation is usually specific to the site one wants to crawl; if the validation component requires the check URL to be of the special type in the example, that limits its usefulness.

Could different databases be made configurable, e.g. Redis support?

The sqlite module is indeed built into Python, but some machines lack the sqlite3 library (mine, for example). Redis would be a better choice. For now I can only switch to MySQL; if the storage operations were abstracted into one interface, it could be swapped for Redis.

Paid-tier configs for Mimvp and Kuaidaili, for anyone who needs them; no need to reinvent the wheel

@qiyeboy The free tiers of Mimvp and Kuaidaili have already been added; here is a paid-tier config for whoever needs it (you need to fill in your purchased order number).
P.S. The paid ones are not great either; many fail to connect. @qiyeboy feel free to delete this issue if it reads like advertising.

parserList = [
        # Mimvp proxy
        {
            'urls': ['http://proxy.mimvp.com/api/fetch.php?orderid=[your order number]&num=[number of proxies per fetch]&country_group=1&http_type=1&anonymous=3,5&isp=5&result_fields=1,2'],
            'type': 'regular',
            'pattern': "(.+):(.+),(.+)",
            'position': {'ip': 0, 'port': 1, 'type': '', 'protocol': 2}  # the original post spelled this key 'postion'
        },
        # Kuaidaili proxy
        {
            'urls': ['http://dev.kuaidaili.com/api/getproxy/?orderid=[your order number]&num=[number of proxies per fetch]&area=%E4%B8%AD%E5%9B%BD&carrier=2&protocol=1&method=1&an_an=1&an_ha=1&f_pr=1&quality=1&sep=2'],
            'type': 'regular',
            'pattern': "(.+):(.+),(.+)",
            'position': {'ip': 0, 'port': 1, 'type': '', 'protocol': 2}
        },
…………
]

API requests return an empty list

When the program is running, API requests with query parameters return an empty list, while requests without parameters return the full IP list.
I configured the MongoDB database.

Questions about the data

I built something similar myself, so I would like to know: which sites do you crawl, how large is the dataset, and how many entries are dead?

ProryIPSite

Expired-IP question

A quick question: proxy IPs are time-limited, so when an IP expires, is there a mechanism to purge it?

I installed it, but the API returns no data and the database is empty

It keeps printing:
HTTPConnectionPool(host='123.154.179.204', port=8998): Max retries exceeded with url: http://www.stilllistener.com/checkpoint1/test11/ (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection object at 0x10be7a110>, 'Connection to 123.154.179.204 timed out. (connect timeout=5)'))

messages like this. Is it because the proxies are unusable? It has been running for over twenty minutes and not a single proxy came through. Are they really all dead?

Visiting 127.0.0.1:8000 after starting the program crashes it

IPProxyPool----->>>>>>>>beginning
http://0.0.0.0:8000/
IPProxyPool----->>>>>>>>db exists ip:33
IPProxyPool----->>>>>>>>now ip num < MINNUM,start crawling...
LoopExit('This operation would block forever', <Hub at 0x7f66f9bbfcc0 epoll pending=0 ref=0 fileno=16>)
Traceback (most recent call last):
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/web.py-0.40.dev0-py3.5.egg/web/wsgiserver/wsgiserver3.py", line 1079, in communicate
    req.parse_request()
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/web.py-0.40.dev0-py3.5.egg/web/wsgiserver/wsgiserver3.py", line 602, in parse_request
    success = self.read_request_line()
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/web.py-0.40.dev0-py3.5.egg/web/wsgiserver/wsgiserver3.py", line 635, in read_request_line
    request_line = self.rfile.readline()
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/web.py-0.40.dev0-py3.5.egg/web/wsgiserver/wsgiserver3.py", line 293, in readline
    data = self.rfile.readline(256)
  File "/usr/lib/python3.5/_pyio.py", line 510, in readline
    b = self.read(nreadahead())
  File "/usr/lib/python3.5/_pyio.py", line 494, in nreadahead
    readahead = self.peek(1)
  File "/usr/lib/python3.5/_pyio.py", line 1062, in peek
    return self._peek_unlocked(size)
  File "/usr/lib/python3.5/_pyio.py", line 1069, in _peek_unlocked
    current = self.raw.read(to_read)
  File "/usr/lib/python3.5/socket.py", line 576, in readinto
    return self._sock.recv_into(b)
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/gevent/_socket3.py", line 385, in recv_into
    self._wait(self._read_event)
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/gevent/_socket3.py", line 157, in _wait
    self.hub.wait(watcher)
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/gevent/hub.py", line 651, in wait
    result = waiter.get()
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/gevent/hub.py", line 899, in get
    return self.hub.switch()
  File "/home/wpm/venv/ipproxy/lib/python3.5/site-packages/gevent/hub.py", line 630, in switch
    return RawGreenlet.switch(self)
gevent.hub.LoopExit: ('This operation would block forever', <Hub at 0x7f66f9bbfcc0 epoll pending=0 ref=0 fileno=16>)

Encoding problem

After deploying on Ubuntu 14.04 Server, the following error started appearing in large numbers the next day:
'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
It looks like an encoding problem; the system LANG variable is zh_CN.UTF-8. How do I fix this?

"Lost connection to MySQL server during query" problem

Using sqlalchemy with pymysql.
When the process runs in the background, it keeps losing the database connection:

Traceback (most recent call last):
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 469, in do_execute
    cursor.execute(statement, parameters)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/cursors.py", line 166, in execute
    result = self._query(query)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/connections.py", line 852, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/connections.py", line 1053, in _read_query_result
    result.read()
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/connections.py", line 1336, in read
    first_packet = self.connection._read_packet()
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/connections.py", line 983, in _read_packet
    packet_header = self._read_bytes(4)
  File "/root/.pyenv/versions/ipproxy-3.5.2/lib/python3.5/site-packages/pymysql/connections.py", line 1029, in _read_bytes
    CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')

I was about to write one of these myself when I found this project, so thanks! Two suggestions.

1. Add an API_IP option to config so the server can be bound to a specific IP at startup.
2. Add security checks, i.e. an IP whitelist and Authorization verification for API access, so the port can be opened to the public network and called from other programs.

Here is the code as well; it comes from another project of mine, where allowIPList is an IP whitelist array I define
# Check the IP whitelist
if allowIPList and not self.request.remote_ip in allowIPList:
    html['status'] = 1
    html['result'] = 'deny ip'
    self.set_header("server", " server")
    self.write(json.dumps(html))
    self.finish()
    return
else:
    pass

And here is the auth check
# Check the Authorization header
if base_auth_user:
    auth_header = self.request.headers.get('Authorization', None)
    if not base_auth_valid(auth_header):
        html['status'] = 1
        html['result'] = 'Auth Faild'
        self.set_header("server", " server")
        self.write(json.dumps(html))
        self.finish()
        return
    else:
        pass

# Basic Auth check
def base_auth_valid(auth_header):
    from tornado.escape import utf8
    from hashlib import md5
    # Basic Zm9vOmJhcg==
    if not auth_header:
        return False

    auth_mode, auth_base64 = auth_header.split(' ', 1)
    assert auth_mode == 'Basic'
    # 'Zm9vOmJhcg==' == base64("foo:bar")
    auth_username, auth_password = auth_base64.decode('base64').split(':', 1)
    if auth_username == base_auth_user and auth_password == base_auth_passwd:
        return True
    else:
        return False

pymongo error in a Python 3 environment

OS: Windows 10 x64 1607
Python: 3.5.2

In config.py, using:

'DB_CONNECT_TYPE': 'pymongo',  
'DB_CONNECT_STRING': 'mongodb://localhost:27017/'

Running it raises the exception:

Traceback (most recent call last):
  File "IPProxy.py", line 7, in <module>
    from validator.Validator import validator, getMyIP
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 171, in __exit__
  File "<frozen importlib._bootstrap>", line 123, in release
RuntimeError: cannot release un-acquired lock

With sqlalchemy there is no problem.

One more suggestion: if I request a proxy through the API while crawling is in progress, the crawl-progress output in the console gets overwritten by the HTTP request log; it would be nice to preserve the progress line instead of overwriting it.

Configured MySQL, but nothing seems to be stored??

sqlite works. I then switched to MySQL: the table gets created, but no data is written.

IPProxyPool----->>>>>>>>beginning
http://0.0.0.0:8000/
IPProxyPool----->>>>>>>>db exists ip:0
IPProxyPool----->>>>>>>>now ip num < MINNUM,start crawling...
IProxyPool----->>>>>>>>Success ip num :0,Fail ip num:2986

Later I tried printing the proxy, and it showed None.

Any pointers would be much appreciated!

Another small suggestion

For a program, two regions are really enough:
domestic (国内) and foreign (国外).
Could IP retrieval work that way rather than by country name?
For example, if I want foreign proxies, country=国外 should be all I need.
Specifying country names would mean far too many values; for domestic it is simple enough to just write 中国 (China).

Hello, why use multiple processes plus coroutines to validate IPs?

I was already following you; happy new year!
A question about the multiprocessing + coroutine design:
the tutorials I have read all say multiprocessing suits CPU-bound programs, because processes can truly use multiple cores,
threads suit IO-bound work, where they just wait,
and coroutines are likewise event-driven and good at waiting.
Wouldn't multithreading + coroutines be a better fit? Just a beginner's two cents.

OperationalError: unsupported file format

File "C:\IPProxyPool\db\SQLiteHelper.py", line 29, in createTable
    "country VARCHAR (20) NOT NULL,area VARCHAR (20) NOT NULL,updatetime TimeStamp NOT NULL DEFAULT (datetime('now','localtime')) ,speed DECIMAL(3,2) NOT NULL DEFAULT 100)"% self.tableName)
OperationalError: unsupported file format

Running IPProxys.py raises the error unsupported file format.
P.S. It still worked an hour ago. I noticed proxy.db displayed as garbled text and clicked "change encoding" in my editor; I am not sure whether that is the cause of the error.

Lots of zombie processes

hi,

  I am not very familiar with Python or gevent. Running the project produces lots of zombie processes; multiprocessing seems to hit the same issue. Is this normal for Python?
   To get logs printed promptly, I run it with nohup python -u.

OS: CentOS Linux release 7.2.1511
Python: 2.7.5


Python 3: an hour has passed; why is the database still empty?

Only pymysql is installed; sqlite is not.
Is nothing being written to the MySQL database?

D:\IPProxyPool-master\IPProxyPool_py3>python IPProxy.py
IPProxyPool----->>>>>>>>beginning
IPProxyPool----->>>>>>>>db exists ip:0
IPProxyPool----->>>>>>>>now ip num < MINNUM,start crawling...
http://0.0.0.0:8000/

mysql> describe proxys;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| ip | varchar(16) | NO | | NULL | |
| port | int(11) | NO | | NULL | |
| types | int(11) | NO | | NULL | |
| protocol | int(11) | NO | | NULL | |
| country | varchar(100) | NO | | NULL | |
| area | varchar(100) | NO | | NULL | |
| updatetime | datetime | YES | | NULL | |
| speed | decimal(5,2) | NO | | NULL | |
| score | int(11) | NO | | NULL | |
+------------+--------------+------+-----+---------+----------------+
10 rows in set (0.02 sec)

mysql> select * from proxys;
Empty set (0.00 sec)

Python 3: it runs but nothing happens...

D:\IPProxyPool-master\IPProxyPool_py3>python IPProxy.py
IPProxyPool----->>>>>>>>beginning
IPProxyPool----->>>>>>>>db exists ip:0
IPProxyPool----->>>>>>>>now ip num < MINNUM,start crawling...
http://0.0.0.0:8000/
127.0.0.1:65199 - - [20/Dec/2016 16:19:56] "HTTP/1.1 GET /" - 200 OK
127.0.0.1:65208 - - [20/Dec/2016 16:21:32] "HTTP/1.1 GET /" - 200 OK
127.0.0.1:65218 - - [20/Dec/2016 16:21:32] "HTTP/1.1 GET /" - 200 OK
127.0.0.1:65208 - - [20/Dec/2016 16:21:32] "HTTP/1.1 GET /favicon.ico" - 404 Not Found
127.0.0.1:65208 - - [20/Dec/2016 16:21:33] "HTTP/1.1 GET /favicon.ico" - 404 Not Found

What does this mean? Could someone explain?

Problem under py3

Traceback (most recent call last):
File "IPProxys/IPProxys.py", line 2, in
import BaseHTTPServer
ImportError: No module named 'BaseHTTPServer'

BaseHTTPServer is a Python 2 module, right? Is there a replacement?

MAXTIME is never used

The config file contains this:
MAXTIME = 3*24*60  # the maximum time a stored proxy may remain in use; anything older is deleted
but nothing seems to reference the constant. Also, is it missing a factor of 60?

It broke.

File "G:\程序设计\已经完成\IPProxyPool\IPProxyPool_py3\util\IPAddress.py", line 79, in setIpRange
self.ipdb.seek(offset)
TypeError: 'float' object cannot be interpreted as an integer

The scores of the proxy IPs are all 0

["118.123.245.150", 3128, 0], ["118.123.245.163", 3128, 0], ["118.123.245.165", 3128, 0], ["27.222.221.207", 9999, 0], ["112.246.244.145", 9999, 0], ["112.228.32.83", 8998, 0], ["119.190.177.231", 9999, 0], ["115.219.86.42", 9999, 0], ["222.73.22.117", 8998, 0]

Why are the scores of all the scanned proxy IPs 0?

Hello, I get the error below when running your code and cannot tell what is wrong. Any advice appreciated

Connected to pydev debugger (build 143.1559)
Exception AttributeError: "'NoneType' object has no attribute 'getpid'" in <Finalize object, dead> ignored
Error in sys.excepthook:
Traceback (most recent call last):
Error in sys.excepthook:
Traceback (most recent call last):
File "D:\Program Files (x86)\JetBrains\PyCharm 5.0.3\helpers\pydev\pydevd_breakpoints.py", line 89, in _excepthook
File "D:\Program Files (x86)\JetBrains\PyCharm 5.0.3\helpers\pydev\pydevd_breakpoints.py", line 89, in _excepthook
_original_excepthook(exctype, value, tb)
_original_excepthook(exctype, value, tb)
TypeErrorTypeError: : 'NoneType' object is not callable'NoneType' object is not callable

Original exception was:
Traceback (most recent call last):

Original exception was:
Traceback (most recent call last):
File "gevent\corecext.pyx", line 360, in gevent.corecext.loop.handle_error (gevent/gevent.corecext.c:6344)
File "gevent\corecext.pyx", line 360, in gevent.corecext.loop.handle_error (gevent/gevent.corecext.c:6344)
File "D:\Python27\lib\site-packages\gevent\hub.py", line 563, in handle_error
File "D:\Python27\lib\site-packages\gevent\hub.py", line 563, in handle_error
self.print_exception(context, type, value, tb)
self.print_exception(context, type, value, tb)
File "D:\Python27\lib\site-packages\gevent\hub.py", line 588, in print_exception
File "D:\Python27\lib\site-packages\gevent\hub.py", line 588, in print_exception
errstream = sys.stderr
errstream = sys.stderr
AttributeErrorAttributeError: : 'NoneType' object has no attribute 'stderr''NoneType' object has no attribute 'stderr'

Process finished with exit code 0

AttributeError: 'NoneType' object has no attribute 'lower'

After running for a while (many days), the error below appears. Could you help take a look?
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Anaconda2\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Anaconda2\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "IPProxys.py", line 30, in startSpider
spider.run()
File "E:\python\IPProxys\spider\ProxySpider.py", line 35, in run
proxys = self.crawl_pool.map(self.crawl,parserList)
File "C:\Anaconda2\lib\site-packages\gevent\pool.py", line 308, in map
return list(self.imap(func, iterable))
File "C:\Anaconda2\lib\site-packages\gevent\pool.py", line 102, in next
raise value.exc
AttributeError: 'NoneType' object has no attribute 'lower'
