
Ruby gem to send URLs to Wayback Machine

Home Page: https://rubygems.org/gems/wayback_archiver

License: MIT License


wayback_archiver's Introduction

WaybackArchiver

Post URLs to the Wayback Machine (Internet Archive), using a crawler, from Sitemap(s), or from a list of URLs.

The Wayback Machine is a digital archive of the World Wide Web [...] The service enables users to see archived versions of web pages across time ...
- Wikipedia



Installation

Install the gem:

$ gem install wayback_archiver

Or add this line to your application's Gemfile:

gem 'wayback_archiver'

And then execute:

$ bundle

Usage

Strategies:

  • auto (the default) - Will try to
    1. Find Sitemap(s) defined in /robots.txt
    2. Then in common sitemap locations /sitemap-index.xml, /sitemap.xml etc.
    3. Fall back to crawling (using the excellent spidr gem)
  • sitemap - Parse Sitemap(s), supports index files (and gzip)
  • urls - Post URL(s)

Ruby

First require the gem

require 'wayback_archiver'

Examples:

Auto

# auto is the default
WaybackArchiver.archive('example.com')

# or explicitly
WaybackArchiver.archive('example.com', strategy: :auto)

Crawl

WaybackArchiver.archive('example.com', strategy: :crawl)

Send only a single URL

WaybackArchiver.archive('example.com', strategy: :url)

Send multiple URLs

WaybackArchiver.archive(%w[example.com www.example.com], strategy: :urls)

Send all URL(s) found in Sitemap

WaybackArchiver.archive('example.com/sitemap.xml', strategy: :sitemap)

# works with Sitemap index files too
WaybackArchiver.archive('example.com/sitemap-index.xml.gz', strategy: :sitemap)

Specify concurrency

WaybackArchiver.archive('example.com', strategy: :auto, concurrency: 10)

Specify max number of URLs to be archived

WaybackArchiver.archive('example.com', strategy: :auto, limit: 10)

Each archive strategy can receive a block that will be called for each URL

WaybackArchiver.archive('example.com', strategy: :auto) do |result|
  if result.success?
    puts "Successfully archived: #{result.archived_url}"
  else
    puts "Error (HTTP #{result.code}) when archiving: #{result.archived_url}"
  end
end

Use your own adapter for posting found URLs

WaybackArchiver.adapter = ->(url) { puts url } # whatever that responds to #call
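
As a slightly larger (hypothetical) example, any object that responds to #call(url) works, so a class-based adapter is possible too, for instance one that only records the found URLs to a file:

# Hypothetical sketch: an adapter that appends each found URL to a file instead of posting it.
class FileLoggingAdapter
  def initialize(path)
    @file = File.open(path, 'a')
  end

  def call(url)
    @file.puts(url)
    @file.flush
  end
end

WaybackArchiver.adapter = FileLoggingAdapter.new('found_urls.txt')
WaybackArchiver.archive('example.com')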

CLI

Usage:

wayback_archiver [<url>] [options]

Print full usage instructions

wayback_archiver --help

Examples:

Auto

# auto is the default
wayback_archiver example.com

# or explicitly
wayback_archiver example.com --auto

Crawl

wayback_archiver example.com --crawl

Send only a single URL

wayback_archiver example.com --url

Send multiple URLs

wayback_archiver example.com www.example.com --urls

Crawl multiple URLs

wayback_archiver example.com www.example.com --crawl

Send all URL(s) found in Sitemap

wayback_archiver example.com/sitemap.xml

# works with Sitemap index files too
wayback_archiver example.com/sitemap-index.xml.gz

Most options

wayback_archiver example.com www.example.com --auto --concurrency=10 --limit=100 --log=output.log --verbose

View archive: https://web.archive.org/web/*/http://example.com (replace http://example.com with your desired domain).

Configuration

ℹ️ By default wayback_archiver doesn't respect robots.txt files, see this Internet Archive blog post for more information.

Configuration (the below values are the defaults)

WaybackArchiver.concurrency = 1
WaybackArchiver.user_agent = WaybackArchiver::USER_AGENT
WaybackArchiver.respect_robots_txt = WaybackArchiver::DEFAULT_RESPECT_ROBOTS_TXT
WaybackArchiver.logger = Logger.new(STDOUT)
WaybackArchiver.max_limit = -1 # unlimited
WaybackArchiver.adapter = WaybackArchiver::WaybackMachine # must implement #call(url)

For a more verbose log you can configure WaybackArchiver as follows:

WaybackArchiver.logger = Logger.new(STDOUT).tap do |logger|
  logger.progname = 'WaybackArchiver'
  logger.level = Logger::DEBUG
end

Pro tip: If you're using the gem in a Rails app you can set WaybackArchiver.logger = Rails.logger.

Docs

You can find the docs online on RubyDoc.

This gem is documented using yard (run the command below from the root of this repository).

yard # Generates documentation to doc/

Contributing

Contributions, feedback and suggestions are very welcome.

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

License

MIT License


wayback_archiver's People

Contributors

bartman081523, buren, dependabot-preview[bot], dependabot-support, jgarber623


wayback_archiver's Issues

Make crawler threaded

Also enable a configurable timeout, so that the user can tweak how fast URLs are fetched/submitted.

`check_path': path conflicts with opaque (URI::InvalidURIError)

Traceback (most recent call last):
        28: from /home/user/.gem/ruby/2.5.0/bin/wayback_archiver:23:in `<main>'
        27: from /home/user/.gem/ruby/2.5.0/bin/wayback_archiver:23:in `load'
        26: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/bin/wayback_archiver:81:in `<top (required)>'
        25: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/bin/wayback_archiver:81:in `each'
        24: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/bin/wayback_archiver:82:in `block in <top (required)>'
        23: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/lib/wayback_archiver.rb:50:in `archive'
        22: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/lib/wayback_archiver.rb:91:in `crawl'
        21: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/lib/wayback_archiver/archive.rb:75:in `crawl'
        20: from /home/user/.gem/ruby/2.6.0/gems/wayback_archiver-1.2.1/lib/wayback_archiver/url_collector.rb:37:in `crawl'
        19: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/spidr.rb:53:in `site'
        18: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:274:in `site'
        17: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:355:in `start_at'
        16: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:373:in `run'
        15: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:665:in `visit_page'
        14: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:599:in `get_page'
        13: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:788:in `prepare_request'
        12: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:605:in `block in get_page'
        11: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/agent.rb:679:in `block in visit_page'
        10: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:238:in `each_url'
         9: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:188:in `each_link'
         8: from /home/user/.gem/ruby/2.6.0/gems/nokogiri-1.10.1/lib/nokogiri/xml/node_set.rb:237:in `each'
         7: from /home/user/.gem/ruby/2.6.0/gems/nokogiri-1.10.1/lib/nokogiri/xml/node_set.rb:237:in `upto'
         6: from /home/user/.gem/ruby/2.6.0/gems/nokogiri-1.10.1/lib/nokogiri/xml/node_set.rb:238:in `block in each'
         5: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:189:in `block in each_link'
         4: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:182:in `block in each_link'
         3: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:239:in `block in each_url'
         2: from /home/user/.gem/ruby/2.6.0/gems/spidr-0.6.0/lib/spidr/page/html.rb:283:in `to_absolute'
         1: from /usr/lib/ruby/2.6.0/uri/generic.rb:807:in `path='
/usr/lib/ruby/2.6.0/uri/generic.rb:753:in `check_path': path conflicts with opaque (URI::InvalidURIError)

cannot load such file -- robots (LoadError) with ruby version 2.5

wayback_archiver
Traceback (most recent call last):
        10: from /usr/local/bin/wayback_archiver:23:in `<main>'
         9: from /usr/local/bin/wayback_archiver:23:in `load'
         8: from /var/lib/gems/2.7.0/gems/wayback_archiver-1.4.0/bin/wayback_archiver:4:in `<top (required)>'
         7: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
         6: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
         5: from /var/lib/gems/2.7.0/gems/wayback_archiver-1.4.0/lib/wayback_archiver.rb:4:in `<top (required)>'
         4: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
         3: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
         2: from /var/lib/gems/2.7.0/gems/wayback_archiver-1.4.0/lib/wayback_archiver/url_collector.rb:2:in `<top (required)>'
         1: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
/usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require': cannot load such file -- robots (LoadError)

Installing the robots gem and re-running gives the same LoadError:

user@tp-ubuntu:~/Downloads/wayback_archiver$ sudo gem install robots
Successfully installed robots-0.10.1
Parsing documentation for robots-0.10.1
Done installing documentation for robots after 0 seconds
1 gem installed
user@tp-ubuntu:~/Downloads/wayback_archiver$ wayback_archiver
Traceback (most recent call last):
        [same frames as above, ending in kernel_require.rb:160:in `rescue in require']
/usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:160:in `require': cannot load such file -- robots (LoadError)
user@tp-ubuntu:~/Downloads/wayback_archiver$ ruby --version
ruby 2.7.4p191 (2021-07-07 revision a21a3b7d23) [x86_64-linux-gnu]

RESOLUTION: (functioning, but not valid anymore, see my last comment)

sudo apt purge ruby
sudo apt autoremove
sudo apt install curl
sudo apt install git
command curl -sSL https://rvm.io/mpapis.asc | gpg --import -
command curl -sSL https://rvm.io/pkuczynski.asc | gpg --import -
\curl -sSL https://get.rvm.io | bash -s stable
source /home/$USER/.rvm/scripts/rvm
rvm install ruby-3
git clone https://github.com/buren/wayback_archiver
cd wayback_archiver
gem build wayback_archiver.gemspec
gem install wayback_archiver-1.4.0.gem 
wayback_archiver --url www.example.com
echo "source /home/$USER/.rvm/scripts/rvm" >> /home/$USER/.bashrc

Invalid gemspec in [wayback_archiver.gemspec]

As I run gem build wayback_archiver.gemspec:

Invalid gemspec in [wayback_archiver.gemspec]: wayback_archiver.gemspec:30: syntax error, unexpected string literal, expecting `end'
...ec.add_development_dependency 'bundler'
   ^
wayback_archiver.gemspec:39: syntax error, unexpected `end', expecting end-of-input
ERROR: Error loading gemspec. Aborting.

Jekyll integration

It would be awesome to automate the process of saving all the pages of a Jekyll website to the Wayback Machine every time the website is deployed.

I suppose it would be quite simple to make it a Jekyll plugin.
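
A rough sketch of what such a plugin could look like, using Jekyll's post_write hook (the file name and config keys here are assumptions, not an existing plugin):

# _plugins/wayback_archiver_hook.rb -- hypothetical sketch
require 'wayback_archiver'

Jekyll::Hooks.register :site, :post_write do |site|
  # Only archive production builds, not local previews.
  next unless Jekyll.env == 'production'

  site_url = site.config['url'] # e.g. "https://example.com"
  WaybackArchiver.archive(site_url, strategy: :auto) if site_url
end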

Uncaught timeout exception in http.rb

I got this traceback:

/usr/lib/ruby/2.7.0/net/http.rb:960:in `initialize': execution expired (Net::OpenTimeout)

This was with the wayback_archiver 1.4.0 gem, reproducible with the latest GitHub code.

I'm not sure if this project is still maintained, but thanks anyway.

Dan

Rate limiting – HTTP 429, Too Many Requests

The Internet Archive has started to rate limit requests more aggressively; we get HTTP 429 responses after just a dozen or so requests (with the default concurrency setting of 5).

After some testing we even get rate limited with concurrency set to 1.

To fix this we have to implement a way to throttle requests in order to successfully submit all URLs.

🔗 Related to #22.
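
Until throttling is built in, one possible workaround is to wrap the default adapter so submissions are spaced out; a sketch (the class and delay value are arbitrary, not part of the gem):

# Sketch: delegate to the built-in WaybackMachine adapter, but sleep between submissions.
class ThrottledAdapter
  def initialize(delay_seconds: 10)
    @delay = delay_seconds
    @mutex = Mutex.new
  end

  def call(url)
    @mutex.synchronize { sleep(@delay) } # serialize and space out submissions
    WaybackArchiver::WaybackMachine.call(url)
  end
end

WaybackArchiver.adapter = ThrottledAdapter.new(delay_seconds: 10)
WaybackArchiver.concurrency = 1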

Road to v2

  • Optimize for Ruby 3.0; Ruby 2.7 is EOL March 31, 2023, so it should still be supported in v2 (in my opinion)
  • Handle rate limiting HTTP 429, see #32.
  • Report functionality
    • --report flag with CSV output for the CLI
    • Return some report object when used in Ruby
  • Go for backwards compatibility; might make some breaking changes if the API would be significantly improved
  • Perhaps support other archiving services, like archive.is
  • Proper configuration with a configure block, WaybackArchiver.configure { |c| ... }, instead of top-level functions like WaybackArchiver.user_agent= (current configuration); see the sketch after this list
  • ... and more

Happy for any input!
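
Regarding the configure-block item above, a rough sketch of the shape it could take (purely illustrative; the Configuration class and its attributes are assumptions, not the current API):

# Hypothetical v2-style configuration, shown only to illustrate the idea.
module WaybackArchiver
  class Configuration
    attr_accessor :concurrency, :user_agent, :respect_robots_txt,
                  :logger, :max_limit, :adapter
  end

  def self.configure
    @configuration ||= Configuration.new
    yield(@configuration) if block_given?
    @configuration
  end
end

WaybackArchiver.configure do |config|
  config.concurrency = 5
  config.user_agent  = 'my-archiver/1.0'
  config.max_limit   = 100
end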

Most URLs sent to Wayback have 502 Bad Gateway status

Running wayback_archiver on https://www.bia.gov (and also on one of my own sites as a test) I see that most of the URLs in the list sent to the Internet Archive have 502 Bad Gateway listed as the status. A few give 200s and a few 404s too, so it isn't everything. Is this expected, or a problem with the site itself?

Found: https://www.bia.gov/training/AOTR/index.html
Found: https://www.bia.gov/training/PL638/index.html
=== WAYBACK ARCHIVER ===
Request are sent with up to 10 parallel threads
Total urls to be sent: 785
[502, Bad Gateway] https://www.bia.gov/WhoWeAre/index.htm
[502, Bad Gateway] https://www.bia.gov/News/index.htm
[502, Bad Gateway] https://www.bia.gov/WhereIsMy/index.htm
[502, Bad Gateway] https://www.bia.gov/Calevents/index.htm

[502, Bad Gateway] https://www.bia.gov/WhoWeAre/BIA/index.htm
[502, Bad Gateway] https://www.bia.gov/WhoWeAre/AS-IA/OHSEM/index.htm
[200, OK] https://www.bia.gov/WhoWeAre/AS-IA/CLA/index.htm
[502, Bad Gateway] https://www.bia.gov/WhoWeAre/AS-IA/OHCM/index.htm

cannot load such file -- rexml/document

Trying to archive https://wiki.algo.informatik.tu-darmstadt.de/ I get the error. I'm not into ruby, but to me it looks like the code depends on rexml and it is missing somewhere. I could be wrong, though.

<internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require': cannot load such file -- rexml/document (LoadError)                        
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from /usr/local/share/gems/gems/wayback_archiver-1.4.0/lib/wayback_archiver/sitemap.rb:1:in `<top (required)>'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from /usr/local/share/gems/gems/wayback_archiver-1.4.0/lib/wayback_archiver/sitemapper.rb:4:in `<top (required)>'                                      
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from /usr/local/share/gems/gems/wayback_archiver-1.4.0/lib/wayback_archiver/url_collector.rb:4:in `<top (required)>'                                   
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from /usr/local/share/gems/gems/wayback_archiver-1.4.0/lib/wayback_archiver.rb:4:in `<top (required)>'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from <internal:/usr/share/rubygems/rubygems/core_ext/kernel_require.rb>:85:in `require'
        from /usr/local/share/gems/gems/wayback_archiver-1.4.0/bin/wayback_archiver:4:in `<top (required)>'
        from /usr/local/bin/wayback_archiver:23:in `load'                      
        from /usr/local/bin/wayback_archiver:23:in `<main>'        

Follow redirects

  • Request the URL, parse the response headers, and check for Location:
  • If present, then send that URL to the WaybackMachine instead

This will add another request for each URL, but it is worth it.
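
A sketch of how the check could look with Net::HTTP (illustrative only; the method name and how it would hook into the gem are assumptions):

require 'net/http'
require 'uri'

# Return the redirect target if the response carries a Location header,
# otherwise return the original URL.
def resolve_redirect(url)
  uri = URI.parse(url)
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.head(uri.request_uri)
  end

  location = response['Location']
  location ? URI.join(url, location).to_s : url
end

resolve_redirect('http://example.com') # => the Location target, or the URL itself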

Replace usage of String#match? in HttpCode

It's not added until Ruby 2.4, and this gem is supposed to be compatible with Ruby >= 2.0.

Also we should update the Travis CI config to run on Ruby 2.0.
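
A sketch of the kind of drop-in replacement that works on Ruby 2.0 (the regex here is only an example, not necessarily the one used in HttpCode):

# String#match? exists only on Ruby >= 2.4; =~ works on Ruby 2.0 as well.
def redirect_code?(code)
  !!(code.to_s =~ /\A3\d{2}\z/)
end

redirect_code?('301') # => true
redirect_code?(200)   # => false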

Error in running the script

I have a Cron Job which runs this gem once a week:

My crontab -e:

# m h  dom mon dow   command
0 1 * * 1 /usr/local/bin/wayback_archiver https://tommi.space/pages-to-archive --crawl --limit=100 --verbose --log=$HOME/wayback_archiver.log && echo "\n$(date) wayback_archiver success!" >> $HOME/wayback_archiver.log

I get this error:

/usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:135:in `require': cannot load such file -- robots (LoadError)
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:135:in `rescue in require'
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:39:in `require'
	from /var/lib/gems/2.5.0/gems/wayback_archiver-1.4.0/lib/wayback_archiver/url_collector.rb:2:in `<top (required)>'
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /var/lib/gems/2.5.0/gems/wayback_archiver-1.4.0/lib/wayback_archiver.rb:4:in `<top (required)>'
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from /var/lib/gems/2.5.0/gems/wayback_archiver-1.4.0/bin/wayback_archiver:4:in `<top (required)>'
	from /usr/local/bin/wayback_archiver:23:in `load'
	from /usr/local/bin/wayback_archiver:23:in `<main>'

What does it mean? Could you help me fix it?

Thank you very much!

Road to v2

  • Optimize for Ruby 3.0; Ruby 2.7 is EOL March 31, 2023, so it should still be supported in v2 (in my opinion)
  • #32.
  • Report functionality
    • #4 with CSV output for the CLI
    • Return some report object when used in Ruby
  • Go for backwards compatibility; might make some breaking changes if the API would be significantly improved
  • Perhaps support other archiving services, like archive.is
  • Proper configuration with configure block, WaybackArchiver.configure { |c| ... } instead of using top-level functions like WaybackArchiver.user_agent= (current configuration)
  • Consider retrying failed requests with some sort of backoff, would need to be configurable (relevant comment)
  • ... and more

Happy for any input!

Consider retrying on certain HTTP response codes, i.e 429, 502, 503

In Request#perform_request

429 Too Many Requests   - Check the HTTP header for a date value; if it's not too far in
                          the future, wait until then, otherwise return
502 Bad Gateway         - Consider waiting a few seconds, or whatever is configured
503 Service Unavailable - Same as for 502
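
A sketch of what such a retry could look like (illustrative only; the method name, parameters, and where it would hook into Request#perform_request are assumptions):

require 'net/http'

RETRYABLE_CODES = %w[429 502 503].freeze

# Retry the yielded request a few times with backoff, honoring Retry-After when present.
def with_retries(max_attempts: 3, base_delay: 5)
  attempts = 0
  loop do
    attempts += 1
    response = yield
    return response unless RETRYABLE_CODES.include?(response.code) && attempts < max_attempts

    retry_after = response['Retry-After'].to_i
    sleep(retry_after > 0 ? retry_after : base_delay * attempts)
  end
end

# Usage sketch:
# with_retries { Net::HTTP.get_response(URI(url_to_submit)) }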

error while collecting URLs during crawl

wayback_archiver stops collecting URLs and throws the error below. The site I'm crawling is pretty big; I'm not sure if that is a factor or not.

Error output below:

[200, OK] https://www.bls.gov/guide/geography/projections.htm
[200, OK] https://www.bls.gov/schedule/2017/01_sched.htm
/Users/larry/.rvm/gems/ruby-2.4.0/gems/site_mapper-0.0.13/lib/site_mapper/crawler.rb:60:in `rescue in collect_urls': uninitialized constant SiteMapper::Crawler::IRB (NameError)
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/site_mapper-0.0.13/lib/site_mapper/crawler.rb:49:in `collect_urls'
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/site_mapper-0.0.13/lib/site_mapper/crawler.rb:35:in `collect_urls'
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/site_mapper-0.0.13/lib/site_mapper.rb:35:in `map'
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/wayback_archiver-0.0.11/lib/wayback_archiver/url_collector.rb:22:in `crawl'
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/wayback_archiver-0.0.11/lib/wayback_archiver.rb:32:in `archive'
	from /Users/larry/.rvm/gems/ruby-2.4.0/gems/wayback_archiver-0.0.11/bin/wayback_archiver:9:in `<top (required)>'
	from /Users/larry/.rvm/gems/ruby-2.4.0/bin/wayback_archiver:22:in `load'
	from /Users/larry/.rvm/gems/ruby-2.4.0/bin/wayback_archiver:22:in `<main>'
	from /Users/larry/.rvm/gems/ruby-2.4.0/bin/ruby_executable_hooks:15:in `eval'
	from /Users/larry/.rvm/gems/ruby-2.4.0/bin/ruby_executable_hooks:15:in `<main>'
