scrapinode's Introduction

scrapinode - content driven and route based scraper

When to use it?

When you want to retrieve information about the page behind a URL your user has just pasted, scrapinode is a great fit. First, scrapinode works out of the box: one line of code gives you the title, the description and the images of any HTML page on the web. Second, if you need more, you can extend it. See the examples to learn more.

Features

  • Retrieve content such as "title", "descriptions", "images" and "videos" from any HTML page with one line of code.
  • Define specific operations based on the URL of the page, matched with a regex, and the content you want to retrieve (see the sketch after this list).
  • Scrape pages with jsdom + jQuery or with cheerio.
  • The HTTP client handles HTTP and HTML redirections.
  • Scrape an image URL as if it were an HTML page.
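
As a quick illustration of the regex-based routing above, the sketch below registers an operation whose route is a regular expression instead of a plain string. That use() accepts a RegExp route is an assumption drawn from the feature list; the selector is likewise only an example.

var scrapinode = require('scrapinode');

// Assumed: use() also accepts a RegExp as the route matcher
scrapinode.use(/amazon\.(com|co\.uk)/, 'title', function(window){
  var $ = window.$;
  return $('#productTitle').text().trim() || null;
});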

Install

npm install scrapinode

Usage

var scrapinode = require('scrapinode');

// Define an operation for a specific route and content
scrapinode.use('society6.com','title',function(window){
  var $ = window.$;
  var url = window.location.href; // URL of the page, in case you want to check it for some reason
  var title = $('h1[itemprop="name"]').text();
  if(!title) return null;
  return title;
});

// Use default operations for content like "title", "descriptions", "images", "videos"
scrapinode.useAll(scrapinode.defaults());

scrapinode.createScraper('http://society6.com/product/Sounds-Good-Dude_T-shirt',function(err,scraper){
  if(err) return console.error(err);
  var title = scraper.get('title');
  console.log(title); // "Sounds Good Dude"
});
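
Once the default operations are registered, the other content types are read the same way. A minimal sketch, assuming the content names match the feature list above:

scrapinode.createScraper('http://example.com/some-page', function(err, scraper){
  if(err) return console.error(err);
  // content names assumed from the feature list
  console.log(scraper.get('descriptions'));
  console.log(scraper.get('images'));
  console.log(scraper.get('videos'));
});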

Test

npm test

Test coverage

make coverage

License

(The MIT License)

Copyright (c) 2013-2015 Rémy Loubradou

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

scrapinode's Issues

Consider 'routing' alternatives for module

My suggestion is that it would be better to have one file dealing with all the extractors for one particular web domain, e.g. a single amazon.js that holds the title, description, image, etc. extractors for Amazon.
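
A rough sketch of what such a per-domain file could look like (the module shape below is hypothetical and only illustrates the suggestion):

// scrapules/amazon.js - all extractors for one domain in one place (hypothetical)
module.exports = {
  route: 'amazon.com',
  operations: {
    title: function(window){
      return window.$('#productTitle').text().trim() || null;
    },
    images: function(window){
      return window.$('#landingImage').attr('src') || null;
    }
  }
};

// and, also hypothetically, registered in one loop at startup:
// var amazon = require('./scrapules/amazon');
// Object.keys(amazon.operations).forEach(function(name){
//   scrapinode.use(amazon.route, name, amazon.operations[name]);
// });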

JSDOM sometimes returns a null "window" object

Hi,

I just found this in my logs. I'm trying to trace back the offending URL and will update the ticket once I find it:

./node_modules/scrapinode/lib/browser.js:138
               window.location.href = url;
                     ^
TypeError: Cannot read property 'location' of undefined
    at jsdom.env.done (./node_modules/scrapinode/lib/browser.js:138:22)
    at processHTML (./node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:155:14)
    at fs.js:207:20
    at Object.oncomplete (fs.js:107:15)
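
Until the root cause is found, a defensive guard in the jsdom.env callback would at least avoid the crash. A sketch, with the callback and url names assumed from the trace above:

// in lib/browser.js, inside the jsdom.env done callback (sketch)
function done(errors, window){
  if(errors || !window){
    return callback(errors || new Error('jsdom returned no window for ' + url));
  }
  window.location.href = url;
  // ... continue as before
}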

Possible memory leak

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace: 
    at Request.<anonymous> (events.js:126:17)
    at Request.request (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:198:27)
    at request (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:583:5)
    at ClientRequest.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:398:9)
    at ClientRequest.g (events.js:143:14)
    at ClientRequest.emit (events.js:64:17)
    at HTTPParser.onIncoming (http.js:1354:9)
    at HTTPParser.onHeadersComplete (http.js:108:31)
    at Socket.ondata (http.js:1228:22)
    at Socket._onReadable (net.js:683:27)
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace: 
    at Request.<anonymous> (events.js:126:17)
    at Request.once (events.js:147:8)
    at Request.request (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:499:8)
    at request (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:583:5)
    at ClientRequest.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:398:9)
    at ClientRequest.g (events.js:143:14)
    at ClientRequest.emit (events.js:64:17)
    at HTTPParser.onIncoming (http.js:1354:9)
    at HTTPParser.onHeadersComplete (http.js:108:31)
    at Socket.ondata (http.js:1228:22)
(the same two warnings and stack traces then repeat verbatim)
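
For reference, Node prints this warning once more than ten listeners are attached to a single emitter. The limit can be raised (or the check disabled) as the message suggests, though the real fix is to stop attaching redundant listeners:

var EventEmitter = require('events').EventEmitter;

var emitter = new EventEmitter();
emitter.setMaxListeners(20); // raise the limit; 0 disables the check entirely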

GZIP/Zlib errors cause Scrapinode to stop

When scraping a website throws a Zlib error, Scrapinode currently stops because these errors are not trapped:

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: incorrect header check
    at Zlib._binding.onerror (zlib.js:295:17)
When adding:

unzip.on('error', function(e){
  console.error("ZLIB ERROR " + e);
});

to utils.js, Scrapinode doesn't crash anymore, and the Zlib error is reported as:

ZLIB ERROR Error: incorrect header check

jsdom crash: Cannot call method 'appendChild' of null

Error with this URL: http://www.victoriassecret.com/ss/Satellite?ProductID=1265748504313&c=Page&cid=1329190978166&pagename=vsdWrapper

TypeError: Cannot call method 'appendChild' of null
    at /home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:250:41
    at Array.forEach (native)
    at /home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:233:18
    at Object.env (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:262:5)
    at Request.onend [as _callback] (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/lib/browser/browser-load.js:73:19)
    at Request.callback (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:119:22)
    at Request.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:525:16)
    at Request.emit (events.js:64:17)
    at IncomingMessage.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:484:14)
    at IncomingMessage.emit (events.js:81:20)

Timeout necessary

Some websites have anti-scraping measures of one sort or another... maybe blocking access from Amazon IPs.

We need a timeout so that if scrapinode tries to access a very slow page, it gives up after some reasonable time, say 20 seconds.
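
The underlying request module already supports a timeout option, so the fix could look roughly like the sketch below; how scrapinode would expose it is an assumption (see also the options issue further down):

var request = require('request');

request({ url: 'http://example.com/slow-page', timeout: 20000 }, function(err, res, body){
  if(err && err.code === 'ETIMEDOUT') return console.error('gave up after 20s');
  // ... hand the body to the scraper as usual
});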

Add a header to accept only HTML/XML/text content

 
var headers = {
  'accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'user-agent' : 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.107 Safari/535.1'
};
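
And a sketch of how the headers object above might be passed on every page fetch (the wiring into scrapinode's HTTP client is an assumption):

var request = require('request');

request({ url: 'http://example.com/', headers: headers }, function(err, res, body){
  if(err) return console.error(err);
  // non-HTML responses could also be rejected early here
  // by checking res.headers['content-type']
});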
 

Add support to accept gzip

Add the Accept-Encoding: gzip header and decode the gzipped response. The page will download faster!
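
A sketch of manual gzip support with the request and zlib modules; older versions of request did not decode gzip bodies themselves, so the decompression has to be explicit:

var request = require('request');
var zlib = require('zlib');

request({ url: 'http://example.com/', headers: { 'accept-encoding': 'gzip' } })
  .on('response', function(res){
    // only gunzip when the server actually compressed the body, otherwise
    // zlib throws the "incorrect header check" error from the Zlib issue above
    var stream = res.headers['content-encoding'] === 'gzip' ?
      res.pipe(zlib.createGunzip()) : res;
    var chunks = [];
    stream.on('data', function(chunk){ chunks.push(chunk); });
    stream.on('end', function(){
      var html = Buffer.concat(chunks).toString('utf8');
      // ... hand html to the scraper
    });
  });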

Add 'timeout' and 'user-agent' to options

It would be nice to be able to set the timeout as an option, per instance created: some sites are very slow.

Also, being able to set a user-agent per instance would be more flexible.
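
A hypothetical shape for such an API; none of these option names exist in the current API, they only illustrate the request:

// hypothetical options object - not part of the current API
scrapinode.createScraper('http://example.com/', {
  timeout: 20000,
  'user-agent': 'MyBot/1.0 (+http://example.com/bot)'
}, function(err, scraper){
  // ...
});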

Cannot read property '$' of undefined

22 Aug 16:34:57 - emerg: TypeError message=Cannot read property '$' of undefined, stack=TypeError: Cannot read property '$' of undefined
    at exports.load.jsdom.env.done (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/lib/browser/browser-load.js:81:39)
    at exports.env.exports.jsdom.env.processHTML (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:149:14)
    at fs.js:117:20
    at Object.oncomplete (fs.js:297:15)

Scrapinode crashes if the body is empty

/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:354
      throw new Error("jsdom.env requires a '" + req + "' argument");
            ^
Error: jsdom.env requires a 'html' argument
    at /home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:354:13
    at Array.forEach (native)
    at Function.processArguments (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:351:12)
    at Object.env (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/jsdom/lib/jsdom.js:142:29)
    at Request.onend [as _callback] (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/lib/browser/browser-load.js:45:19)
    at Request.callback (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:109:22)
    at Request.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:481:18)
    at Request.emit (events.js:64:17)
    at IncomingMessage.<anonymous> (/home/lbdremy/workspace/nodejs/hippo/node_modules/scrapinode/node_modules/request/main.js:442:16)
    at IncomingMessage.emit (events.js:81:20)
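
A guard before the jsdom.env call would avoid this. A sketch, with the body, url and callback names assumed from the trace:

// in lib/browser/browser-load.js, before calling jsdom.env (sketch)
if(!body || body.trim() === ''){
  return callback(new Error('Empty response body for ' + url));
}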

Use the markup schema described at http://schema.org/

This site provides a collection of schemas, i.e. HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google, Yahoo! and Yandex rely on this markup to improve the display of search results, making it easier for people to find the right web pages.

Use the markup schema described at http://schema.org/, as search engines do, to find relevant information about the page in the file scrapules/default.js.
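
Since schema.org annotations are plain microdata attributes, the default operations could query them directly. A sketch of what this might look like in scrapules/default.js:

// prefer schema.org microdata, fall back to the classic tags (sketch)
function title(window){
  var $ = window.$;
  return $('[itemprop="name"]').first().text().trim() ||
         $('title').text().trim() || null;
}

function images(window){
  var $ = window.$;
  return $('[itemprop="image"]').map(function(){
    return $(this).attr('src') || $(this).attr('content');
  }).get();
}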
