
# web_crawler

Web crawler helps you parse and collect data from the web.

## How it works

```ruby
class StackoverflowCrawler < WebCrawler::Base

    target "http://stackoverflow.com/questions/tagged/:tag", :tag=> %w{ruby ruby-on-rails ruby-on-rails-3}
    logger "path/to/log/file" # or Logger.new(...)

    cache_to '/tmp/cache/stackoverflow'

    context "#questions .question-summary", :jobs do

      #TODO: defaults :format => lambda{ |v| v.to_i }

      map '.vote-count-post strong', :to => :vote_count, :format => lambda{ |v| v.to_i }
      map '.views', :to => :view_count, :format => lambda{ |v| v.match(/\d+/)[0].to_i }
      map '.status strong', :to => :answer_count, :format => lambda{ |v| v.to_i }
      map '.summary h3 a', :to => :title, :format => lambda{ |v| v.strip }
      map '.summary .excerpt', :to => :excerpt, :format => lambda{ |v| v.strip }
      map '.user-action-time .relativetime', :to => :posted_at, :on => [:attr, :title]
      map '.tags .post-tag', :to => :tags

    end
end
```
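Each `map` call above pairs a CSS selector with a target attribute name (`:to`) and an optional `:format` lambda applied to the extracted text. As a rough illustration of that mapping step, here is a framework-free sketch; `MAPPINGS`, `apply_mappings`, and the `extracted` hash are hypothetical names for this example, not part of the web_crawler API:

```ruby
# A stripped-down illustration of what a `map` rule does: take the raw
# text extracted for a selector, run it through the optional :format
# lambda, and store the result under the :to key.
MAPPINGS = [
  { selector: '.vote-count-post strong', to: :vote_count, format: ->(v) { v.to_i } },
  { selector: '.views',                  to: :view_count, format: ->(v) { v.match(/\d+/)[0].to_i } },
  { selector: '.summary h3 a',           to: :title,      format: nil },
]

# `extracted` stands in for text already pulled out of the page by selector.
def apply_mappings(extracted, mappings)
  mappings.each_with_object({}) do |rule, result|
    raw = extracted[rule[:selector]]
    result[rule[:to]] = rule[:format] ? rule[:format].call(raw) : raw
  end
end

extracted = {
  '.vote-count-post strong' => '42',
  '.views'                  => '1234 views',
  '.summary h3 a'           => 'How do I parse HTML in Ruby?',
}

apply_mappings(extracted, MAPPINGS)
# => {:vote_count=>42, :view_count=>1234, :title=>"How do I parse HTML in Ruby?"}
```

The `:format` lambdas keep parsing concerns (integer coercion, regexp extraction) next to the selector they belong to, so the crawler class stays declarative.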

## TODO

 1. Add documentation
 2. ...
 3. PROFIT!!!1

(:

## Contributors

renatocassino, webgago


## Issues

### Cache requests

Interface of the feature:

CLI:

```
web_crawler get www.google.com/news --cached
```

Program:

```ruby
include WebCrawler

batch = BatchRequest.new(urls, cached: true)
batch.process #=> the first time the urls are fetched, a real Request is executed via Net::HttpRequest

batch2 = BatchRequest.new(urls, cached: true)
batch2.process #=> on subsequent runs a CachedRequest is executed and responses are loaded from the cache files
```
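The proposal above only specifies the interface, not the storage. One plausible shape for the caching layer is to key each URL by a digest and store the response body as a file, serving repeat requests from disk. `FileCache` below is a hypothetical sketch of that idea, not the gem's actual class:

```ruby
require 'digest/sha1'
require 'fileutils'
require 'tmpdir'

# Hypothetical file-backed cache: one file per URL, named by the
# SHA1 digest of the URL so any URL maps to a safe filename.
class FileCache
  def initialize(dir)
    @dir = dir
    FileUtils.mkdir_p(dir)
  end

  def path_for(url)
    File.join(@dir, Digest::SHA1.hexdigest(url))
  end

  # Returns the cached body if present; otherwise fetches it with the
  # given block, writes it to disk, and returns it.
  def fetch(url)
    path = path_for(url)
    return File.read(path) if File.exist?(path)
    body = yield(url)
    File.write(path, body)
    body
  end
end

cache = FileCache.new(Dir.mktmpdir('web_crawler_cache'))
fetches = 0
fetcher = ->(url) { fetches += 1; "<html>body of #{url}</html>" }

cache.fetch('http://www.google.com/news', &fetcher) # executes the real request
cache.fetch('http://www.google.com/news', &fetcher) # served from the cache file
fetches # => 1
```

Digest-based filenames avoid collisions and path-escaping issues, which is why `/tmp/cache/stackoverflow`-style directories (as in `cache_to` above) can hold arbitrary URLs.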

### Factory urls feature

Interface of the feature:

CLI:

```
web_crawler factory "www.google.com/$1?param=$2" "%w{news reader mail}" "(0..10).map{ |i| i * i }"
```

Program:

```ruby
include WebCrawler

urls = FactoryUrl.new("www.google.com/$1?param=$2", %w{news reader mail}, 0..10)
batch = BatchRequest.new(urls.factory)
# urls.factory == [
#        "www.google.com/news?param=0",
#        "www.google.com/news?param=1",
#        ...
#        "www.google.com/mail?param=10",]

batch.process
```
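The factory expands the pattern against every combination of the argument lists, substituting `$1`, `$2`, … positionally. That expansion can be sketched in a few lines with `Array#product`; `factory_urls` is a hypothetical helper for this example, not the gem's API:

```ruby
# Expand a URL pattern with $1/$2/... placeholders against the cartesian
# product of the given parameter lists. Ranges are accepted and
# converted to arrays.
def factory_urls(pattern, *params)
  lists = params.map { |p| Array(p) }
  first, *rest = lists
  first.product(*rest).map do |combo|
    combo.each_with_index.inject(pattern) do |url, (value, i)|
      url.gsub("$#{i + 1}", value.to_s)
    end
  end
end

urls = factory_urls("www.google.com/$1?param=$2", %w{news reader mail}, 0..2)
urls.first # => "www.google.com/news?param=0"
urls.last  # => "www.google.com/mail?param=2"
urls.size  # => 9
```

With three names and eleven values, the real example above would produce 3 × 11 = 33 urls, one per combination.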

### Follow the links feature

Interface of the feature:

CLI (the `--follow` argument is a regexp):

```
web_crawler get www.google.com/news --follow "example\.com/.*"
```

Program:

```ruby
include WebCrawler

batch = BatchRequest.new(urls, follow: /example\.com\/.*/)
batch.process
batch.followed #=> array of the links that were followed
```
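At its core, the follow step just scans each fetched page for links and keeps those matching the pattern, queueing them for the next round of requests. A rough sketch of that filtering step (`extract_followable_links` is a hypothetical helper, not the gem's API):

```ruby
# Pull href attribute values out of an HTML string and keep only the
# links matching the follow pattern. A real crawler would use an HTML
# parser; a regexp scan is enough to illustrate the idea.
def extract_followable_links(html, pattern)
  html.scan(/href=["']([^"']+)["']/).flatten.select { |link| link =~ pattern }
end

html = <<~HTML
  <a href="http://example.com/articles/1">one</a>
  <a href="http://other.org/page">skip</a>
  <a href="http://example.com/articles/2">two</a>
HTML

extract_followable_links(html, /example\.com\/.*/)
# => ["http://example.com/articles/1", "http://example.com/articles/2"]
```

The matched links would then feed back into a `BatchRequest`, and the collected matches are what `batch.followed` would report.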
