
A distributed web crawler framework (XXL-CRAWLER).

Home Page: http://www.xuxueli.com/xxl-crawler

License: Apache License 2.0

Java 100.00%
crawler web spider object-oriented flexible xxl-crawler java distributed

xxl-crawler's Introduction

XXL-CRAWLER

XXL-CRAWLER, a distributed web crawler framework.
-- Home Page --

Introduction

XXL-CRAWLER is a distributed web crawler framework. A distributed crawler can be developed in a single line of code, with features such as multithreading, asynchronous execution, dynamic IP proxies, distributed operation, and JavaScript rendering.

Documentation

Features

  • 1. Simple: an intuitive, concise API that is quick to pick up.
  • 2. Lightweight: the only hard dependency is jsoup; simple and efficient.
  • 3. Modular: a modular design that is easy to extend.
  • 4. Object-oriented: page data can be conveniently mapped to PageVO objects via annotations; the framework automatically extracts, populates, and returns the PageVO data. A single page may yield one or more PageVOs.
  • 5. Multithreaded: runs on a thread pool to improve crawl throughput.
  • 6. Distributed support: distributed operation is possible by extending the "RunData" module and sharing run data via Redis or a database. The default LocalRunData provides a standalone crawler.
  • 7. JS rendering: JS-rendered data can be collected by extending the "PageLoader" module. Implementations for Jsoup (no JS rendering, faster), HtmlUnit (JS rendering), and Selenium + PhantomJS (JS rendering, high compatibility) are provided out of the box, and other implementations can be freely plugged in.
  • 8. Failure retry: failed requests are retried, with a configurable retry count.
  • 9. Proxy IPs: counter anti-crawling WAF rules.
  • 10. Dynamic proxies: the proxy pool can be adjusted at runtime, and custom proxy-pool routing strategies are supported.
  • 11. Asynchronous: supports both synchronous and asynchronous operation.
  • 12. Whole-site spread: supports spreading out from seed URLs to crawl an entire site.
  • 13. Deduplication: prevents pages from being crawled repeatedly.
  • 14. URL whitelist: page whitelist regular expressions can be set to filter URLs.
  • 15. Custom request info, e.g. request parameters, Cookies, Headers, UserAgent rotation, Referrer.
  • 16. Dynamic parameters: request parameters can be adjusted at runtime.
  • 17. Timeout control: the crawler request timeout is configurable.
  • 18. Active pause: crawler threads pause after processing each page, to avoid being blocked for overly frequent requests.
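The features above can be sketched with the Builder API that also appears in the issue reports further down this page (setUrls, setAllowSpread, setPageParser, start). This is a hedged sketch, not official documentation: the seed URL is a placeholder, and the exact imports and available setters should be checked against the project's own samples.

```java
// Sketch of a minimal crawler; requires the xxl-crawler dependency plus
// jsoup (imports omitted for brevity). Builder methods are those visible
// elsewhere on this page; verify against the official samples.
XxlCrawler crawler = new XxlCrawler.Builder()
        .setUrls("https://example.com/")          // placeholder seed URL
        .setAllowSpread(true)                     // feature 12: spread across the site
        .setPageParser(new PageParser() {
            @Override
            public void parse(Document html, Element pageVoElement, Object pageVo) {
                System.out.println(html.title()); // parse/extract page data here
            }
        })
        .build();
crawler.start(true);                              // true = run synchronously
```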

Communication

Contributing

Contributions are welcome! Open a pull request to fix a bug, or open an Issue to discuss a new feature or change.


Registration

More companies that have adopted the product are welcome to register at the registration address; registration is solely for product promotion.

Copyright and License

This product is open source and free, and will continue to receive free community technical support. Individuals and enterprises are free to adopt and use it.

  • Licensed under the Apache License, Version 2.0.
  • Copyright (c) 2015-present, xuxueli.


Donate

Any amount is enough to express your appreciation; thank you very much :) To donate


xxl-crawler's People

Contributors

dependabot[bot], jnan77, xuxueli


xxl-crawler's Issues

POST request returns 400

Hi, I couldn't find a template for making POST requests in the test cases.

Here is my code:
Map<String, String> dataMap = new HashMap<>();
dataMap.put("category", "**");
dataMap.put("currentPage", "1");
dataMap.put("pageSize", "30");

Map<String, String> headerMap = new HashMap<>();
headerMap.put("Accept-Encoding", "gzip");
headerMap.put("Content-Type", "application/json;charset=UTF-8");
headerMap.put("User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36");

XxlCrawler xxlCrawler = new XxlCrawler.Builder()
        .setUrls(url)
        .setAllowSpread(false)
        .setIfPost(true)
        .setHeaderMap(headerMap)
        .setParamMap(dataMap)
        .setPageParser(new PageParser() {
            @Override
            public void parse(Document html, Element pageVoElement, Object pageVo) {
                XxlJobLogger.log("html:{}", html);
            }
        })
        .build();
xxlCrawler.start(true);
return SUCCESS;

This is the error:
org.jsoup.HttpStatusException: HTTP error fetching URL. Status=400

Thread safety question

LocalRunData uses a LinkedBlockingQueue to record the URLs that need to be crawled. Since this is already a thread-safe queue, is the volatile keyword still needed?
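As a short, self-contained answer (this demo class is illustrative and not part of xxl-crawler): a final LinkedBlockingQueue field does not need volatile. Final fields are safely published under the Java Memory Model, and the queue's internal locks guarantee visibility of its contents across threads; volatile would only matter if the queue reference itself were reassigned.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo (not xxl-crawler code): a final, thread-safe queue
// field needs no volatile — safe publication plus the queue's own internal
// locking already make its contents visible to other threads.
public class QueueVisibilityDemo {
    private final BlockingQueue<String> urlQueue = new LinkedBlockingQueue<>();

    public void addUrl(String url) { urlQueue.offer(url); }
    public String takeUrl()        { return urlQueue.poll(); }

    public static void main(String[] args) throws InterruptedException {
        QueueVisibilityDemo demo = new QueueVisibilityDemo();
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) demo.addUrl("url-" + i);
        });
        producer.start();
        producer.join();              // join() establishes happens-before:
        int drained = 0;              // all producer writes are now visible
        while (demo.takeUrl() != null) drained++;
        System.out.println(drained);  // prints 100
    }
}
```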

[issue] Under multithreading, tryFinish() can, with a small probability, misjudge the current running state

  • issue description

Under multithreading, tryFinish() can misjudge the running state of the CrawlerThreads and stop early. Below is a run of XxlCrawlerTest with 3 threads and log output:
(screenshot of log output)

The probability is low, roughly once in ten runs. The likely cause: thread-3 calls tryFinish() and reads isRunning == false for all three CrawlerThreads; at that exact moment thread-1 calls crawler.getRunData().getUrl() and sets running to true (which thread-3 can no longer observe); thread-3 then finds runData.getUrlNum() == 0 to be true, so isEnd is true, causing the misjudgment:
(screenshot of log output)

  • solution
  1. Rewrite tryFinish() to check runData.getUrlNum() == 0 first and only then read each CrawlerThread's state, so that a concurrent crawler.getRunData().getUrl() call cannot hide the latest running state:
public void tryFinish(){
    boolean isEnd = runData.getUrlNum()==0;
    boolean isRunning = false;
    for (CrawlerThread crawlerThread: crawlerThreads) {
        if (crawlerThread.isRunning()) {
            isRunning = true;
            break;
        }
    }
    isEnd = isEnd && !isRunning;
    if (isEnd) {
        logger.info(">>>>>>>>>>> xxl crawler is finished.");
        stop();
    }
}
  2. Add the volatile keyword to CrawlerThread's running field to guarantee visibility:
private volatile boolean running;

Whole-site spread feature malfunction

With the whole-site spread feature enabled, the URLs picked up in JsoupUtil.findLinks() are incomplete: the href obtained from the tags is a relative path, not an absolute one. All three calls below return relative paths, so URL validation fails and the spread crawl fails. Has anyone run into this?
tip: data is collected with JS rendering, the "Selenium + PhantomJS" approach

  1. item.absUrl("abs:href");
  2. item.attr("abs:href");
  3. item.attr("href");

The URL being crawled is http://www.bootcss.com/
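Two things likely contribute here. First, jsoup's absUrl takes the plain attribute name, so call 1 should be item.absUrl("href") (the "abs:" prefix form belongs with attr, as in call 2). Second, absUrl and attr("abs:href") can only resolve relative links when the Document carries a base URI; when HTML arrives as a string from Selenium/PhantomJS, it should be parsed with Jsoup.parse(html, pageUrl), otherwise they return an empty string. The resolution itself is ordinary URI arithmetic, as this stdlib-only sketch shows (class name and URLs are illustrative):

```java
import java.net.URI;

// Illustrative stdlib-only demo: resolving a relative href against the
// page URL — the same computation jsoup's absUrl("href") performs when the
// Document was parsed with a base URI (e.g. Jsoup.parse(html, pageUrl)).
public class AbsUrlDemo {
    static String toAbsolute(String pageUrl, String href) {
        return URI.create(pageUrl).resolve(href).toString();
    }

    public static void main(String[] args) {
        System.out.println(toAbsolute("http://www.bootcss.com/", "p/bootstrap/"));
        // prints http://www.bootcss.com/p/bootstrap/
    }
}
```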

Question about com.xuxueli.crawler.thread.CrawlerThread#processPage

Shouldn't the following code in com.xuxueli.crawler.thread.CrawlerThread#processPage return false instead?

if (!crawler.getRunConf().validWhiteUrl(pageRequest.getUrl())) {     // limit unvalid-page parse, only allow spread child, finish here
    return true;
}
