
pylinkchecker's Introduction

pylinkchecker

Version: 0.2pre

pylinkchecker is a standalone and pure python link checker and crawler that traverses a web site and reports errors (e.g., 500 and 404 errors) encountered. The crawler can also download resources such as images, scripts and stylesheets.

pylinkchecker's performance can be improved by installing additional libraries that require a C compiler, but these libraries are optional.

We created pylinkchecker so that it could be executed in environments without access to a compiler (e.g., Microsoft Windows, some *nix production environments) or with an old version of python (e.g., on CentOS).

pylinkchecker is highly modular and has many configuration options, but the only required parameter is the starting URL:

pylinkcheck.py http://www.example.com/

pylinkchecker can also be used programmatically by calling one of the functions in pylinkchecker.api (see the API Usage section below).

Quick Start

Install pylinkchecker with pip or easy_install:

pip install pylinkchecker

Crawl all pages from a site and show progress:

pylinkcheck.py -P http://www.example.com/

Requirements

pylinkchecker does not require external libraries if executed with python 2.x. It requires beautifulsoup4 if executed with python 3.x. It has been tested on python 2.6, python 2.7, and python 3.3.

For production use, it is strongly recommended to use lxml or html5lib because the default HTML parser provided by python is not very lenient.

Optional Requirements

These libraries can be installed to enable certain modes in pylinkchecker:

lxml

beautifulsoup can use lxml to speed up the parsing of HTML pages. Because lxml requires C libraries, this is only an optional requirement.

html5lib

beautifulsoup can use html5lib to process incorrect or strange markup. It is slower than lxml, but believed to be more lenient.

gevent

this non-blocking I/O library enables pylinkchecker to use green threads instead of processes or threads. gevent could potentially speed up crawling on web sites with many small pages.

cchardet

this library speeds up the detection of document encoding.
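
If you are unsure which of the optional libraries listed above are available in your environment, a quick check along these lines can help (a generic Python sketch, not part of pylinkchecker):

import importlib

# Report which of the optional libraries can be imported in the current environment.
for name in ("lxml", "html5lib", "gevent", "cchardet"):
    try:
        importlib.import_module(name)
        print("{0}: available".format(name))
    except ImportError:
        print("{0}: not installed (optional)".format(name))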

Usage

This is a list of all available options. See the end of the README file for usage examples.

Usage: pylinkcheck.py [options] URL ...

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -V VERBOSE, --verbose=VERBOSE

  Crawler Options:
    These options modify the way the crawler traverses the site.

    -O, --test-outside  fetch resources from other domains without crawling
                        them
    -H ACCEPTED_HOSTS, --accepted-hosts=ACCEPTED_HOSTS
                        comma-separated list of additional hosts to crawl
                        (e.g., example.com,subdomain.another.com)
    -i IGNORED_PREFIXES, --ignore=IGNORED_PREFIXES
                        comma-separated list of host/path prefixes to ignore
                        (e.g., www.example.com/ignore_this_and_after/)
    -u USERNAME, --username=USERNAME
                        username to use with basic HTTP authentication
    -p PASSWORD, --password=PASSWORD
                        password to use with basic HTTP authentication
    -t TYPES, --types=TYPES
                        Comma-separated values of tags to look for when
                        crawling a site. Default (and supported types):
                        a,img,link,script
    -T TIMEOUT, --timeout=TIMEOUT
                        Seconds to wait before considering that a page timed
                        out
    -C, --strict        Does not strip whitespace from href and src
                        attributes
    -P, --progress      Prints crawler progress in the console
    -N, --run-once      Only crawl the first page.
    -S, --show-source   Show source of links (html) in the report.

  Performance Options:
    These options can impact the performance of the crawler.

    -w WORKERS, --workers=WORKERS
                        Number of workers to spawn
    -m MODE, --mode=MODE
                        Types of workers: thread (default), process, or green
    -R PARSER, --parser=PARSER
                        Type of HTML parser: html.parser (default) or lxml

  Output Options:
    These options change the output of the crawler.

    -f FORMAT, --format=FORMAT
                        Format of the report: plain
    -o OUTPUT, --output=OUTPUT
                        Path of the file where the report will be printed.
    -W WHEN, --when=WHEN
                        When to print the report. error (only if a
                        crawling error occurs) or always (default)
    -E REPORT_TYPE, --report-type=REPORT_TYPE
                        Type of report to print: errors (default, summary and
                        erroneous links), summary, all (summary and all links)
    -c, --console       Prints report to the console in addition to other
                        output options such as file or email.

  Email Options:
    These options allow the crawler to send a report by email.

    -a ADDRESS, --address=ADDRESS
                        Comma-separated list of email addresses used to send a
                        report
    --from=FROM_ADDRESS
                        Email address to use in the from field of the email
                        (optional)
    -s SMTP, --smtp=SMTP
                        Host of the smtp server
    --port=PORT         Port of the smtp server (optional)
    --tls               Use TLS with the email server.
    --subject=SUBJECT   Subject of the email (optional)
    --smtp-username=SMTP_USERNAME
                        Username to use with the smtp server (optional)
    --smtp-password=SMTP_PASSWORD
                        Password to use with the smtp server (optional)
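
For example, the email options above can be combined with the output options to email a report only when a crawling error occurs (the addresses and SMTP host below are placeholders):

pylinkcheck.py -a webmaster@example.com -s smtp.example.com --when=error http://www.example.com/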

Usage Examples

Crawl a site and show progress

pylinkcheck.py --progress http://example.com/

Crawl a site starting from 2 URLs

pylinkcheck.py http://example.com/ http://example2.com/

Crawl a site (example.com) and all pages belonging to another host

pylinkcheck.py -H additionalhost.com http://example.com/

Report status of all links (even successful ones)

pylinkcheck.py --report-type=all http://example.com/

Report status of all links and show the HTML source of these links

pylinkcheck.py --report-type=all --show-source http://example.com/

Only crawl starting URLs and access all linked resources

pylinkcheck.py --run-once http://example.com/

Only access links (a href) and ignore images, stylesheets and scripts

pylinkcheck.py --types=a http://example.com/

Crawl a site with 4 threads (default is one thread)

pylinkcheck.py --workers=4 http://example.com/

Crawl a site with 4 processes (default is one thread)

pylinkcheck.py --mode=process --workers=4 http://example.com/

Crawl a site and use lxml to parse HTML (faster, must be installed)

pylinkcheck.py --parser=lxml http://example.com/

Print debugging info

pylinkcheck.py --verbose=2 http://example.com/

API Usage

To crawl a site from a single URL:

from pylinkchecker.api import crawl
crawled_site = crawl("http://www.example.com/")
number_of_crawled_pages = len(crawled_site.pages)
number_of_errors = len(crawled_site.error_pages)

To crawl a site and pass some configuration options (the same supported by the command line interface):

from pylinkchecker.api import crawl_with_options
crawled_site = crawl_with_options(["http://www.example.com/"], {"run-once":
    True, "workers": 10})
number_of_crawled_pages = len(crawled_site.pages)
number_of_errors = len(crawled_site.error_pages)
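
For example, the error count can be used to fail a build in a continuous integration script (a minimal sketch that relies only on the attributes shown above):

import sys
from pylinkchecker.api import crawl_with_options

# Crawl the site and exit with a non-zero status if any page returned an error.
crawled_site = crawl_with_options(["http://www.example.com/"], {"workers": 4})
if len(crawled_site.error_pages) > 0:
    print("{0} page(s) returned an error".format(len(crawled_site.error_pages)))
    sys.exit(1)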

FAQ and Troubleshooting

I cannot find pylinkcheck.py on Windows with virtualenv

This is a known problem with virtualenv on Windows: the interpreter used to launch the script is different from the one used by the virtualenv. Prefix pylinkcheck.py with the full path: python c:\myvirtualenv\Scripts\pylinkcheck.py

I see Exception KeyError ... module 'threading' when using --mode=green

This output is generally harmless and is generated by gevent patching the python threading module. If someone knows how to make it go away, patches are more than welcome :-)

License

This software is licensed under the New BSD License. See the included LICENSE file for the full license text. It includes the beautifulsoup library, which is licensed under the MIT license.

pylinkchecker's People

Contributors

dorivard, jerryker

pylinkchecker's Issues

Doesn't Seem to Crawl

I'm probably missing something obvious. When I run:
pylinkcheck.py -P localhost:6464

I only see output for links and images on the index page and not output from the pages it links to. Does this run recursively? Am I missing an option?

providing an API to use pylinkchecker as a python module

It would be nice to have a public API, so pylinkchecker could easily be used as a python module.

For example:
import pylinkchecker

checker = pylinkchecker.Checker("http://website.tld")
checker.crawl() #do requests and parse responses
checker.errors() #get error requests (4xx, 5xx)
checker.success() #get 2xx requests
checker.errors().to_html() #return a string of html of the error requests

(This is just a draft.)

Hammer prevention

I can't find anything obvious in the docs for this, but any built-in functionality to control how fast it moves from URL to the next? Basically a small delay to prevent "hammering" a server?

Wheel or source pushed to PyPi

Any chance on getting a wheel and/or source pushed to PyPi so we can install with pip?

python setup.py sdist bdist_wheel upload

Thanks!

Recursion/depth level

Is it possible to restrict this from scanning too deeply into a site?

If I wanted to hack this in myself where would I start looking?

Thanks!

Ignore Telephone Links

Is there a way to enable the linkchecker to ignore telephone links? For a site with the following link:

<a href="tel:18002524793"><span>Assisted Living<br>Sales Office</span>1-800-252-4793</a>

The linkchecker attempts to crawl http://www.theosborn.org/tel:18006732926 which returns 404. The sites my company run have multiple telephone links. This site in particular has 6 telephone links in a sidebar that renders on every single page, which results in quite a few false positives:

ERROR Crawled 1049 urls with 504 error(s) in 126.18 seconds
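
(A possible workaround, untested: the -i/--ignore option documented above accepts host/path prefixes, so something like the following might skip these links.)

pylinkcheck.py --ignore=www.theosborn.org/tel: http://www.theosborn.org/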

Add sphinx documentation

Now that we have an API, it makes sense to get more documentation than the single README file (which is already quite heavy).

It might be a good time to use Read the Docs.

Add a strict mode

If in strict mode, take the url as is. Otherwise, strip it of leading and trailing whitespaces.

In the future, we may do more url transformations if not in strict mode to ensure that we treat the page like a browser.
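
A minimal sketch of the behaviour described above (illustrative only, not the actual implementation):

def clean_url(url, strict=False):
    # In strict mode the URL is used exactly as found in the page;
    # otherwise leading and trailing whitespace is removed.
    if strict:
        return url
    return url.strip()

print(clean_url("  http://www.example.com/page \n"))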

Python3 support missing

(venv) jannek@jannek-P720:/ssd/pylinkchecker$ python setup.py install
running install
/ssd/pylinkchecker/venv/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/ssd/pylinkchecker/venv/lib/python3.10/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
creating pylinkchecker.egg-info
writing pylinkchecker.egg-info/PKG-INFO
writing dependency_links to pylinkchecker.egg-info/dependency_links.txt
writing requirements to pylinkchecker.egg-info/requires.txt
writing top-level names to pylinkchecker.egg-info/top_level.txt
writing manifest file 'pylinkchecker.egg-info/SOURCES.txt'
reading manifest file 'pylinkchecker.egg-info/SOURCES.txt'
adding license file 'LICENSE.txt'
writing manifest file 'pylinkchecker.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/pylinkchecker
copying pylinkchecker/compat.py -> build/lib/pylinkchecker
copying pylinkchecker/__init__.py -> build/lib/pylinkchecker
copying pylinkchecker/api.py -> build/lib/pylinkchecker
copying pylinkchecker/crawler.py -> build/lib/pylinkchecker
copying pylinkchecker/tests.py -> build/lib/pylinkchecker
copying pylinkchecker/models.py -> build/lib/pylinkchecker
copying pylinkchecker/reporter.py -> build/lib/pylinkchecker
copying pylinkchecker/urlutil.py -> build/lib/pylinkchecker
creating build/lib/pylinkchecker/bs4
copying pylinkchecker/bs4/diagnose.py -> build/lib/pylinkchecker/bs4
copying pylinkchecker/bs4/dammit.py -> build/lib/pylinkchecker/bs4
copying pylinkchecker/bs4/__init__.py -> build/lib/pylinkchecker/bs4
copying pylinkchecker/bs4/element.py -> build/lib/pylinkchecker/bs4
creating build/lib/pylinkchecker/bs4/builder
copying pylinkchecker/bs4/builder/__init__.py -> build/lib/pylinkchecker/bs4/builder
copying pylinkchecker/bs4/builder/_htmlparser.py -> build/lib/pylinkchecker/bs4/builder
copying pylinkchecker/bs4/builder/_lxml.py -> build/lib/pylinkchecker/bs4/builder
copying pylinkchecker/bs4/builder/_html5lib.py -> build/lib/pylinkchecker/bs4/builder
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/compat.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/__init__.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/api.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/crawler.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/tests.py -> build/bdist.linux-x86_64/egg/pylinkchecker
creating build/bdist.linux-x86_64/egg/pylinkchecker/bs4
copying build/lib/pylinkchecker/bs4/diagnose.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4
copying build/lib/pylinkchecker/bs4/dammit.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4
copying build/lib/pylinkchecker/bs4/__init__.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4
creating build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder
copying build/lib/pylinkchecker/bs4/builder/__init__.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder
copying build/lib/pylinkchecker/bs4/builder/_htmlparser.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder
copying build/lib/pylinkchecker/bs4/builder/_lxml.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder
copying build/lib/pylinkchecker/bs4/builder/_html5lib.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder
copying build/lib/pylinkchecker/bs4/element.py -> build/bdist.linux-x86_64/egg/pylinkchecker/bs4
copying build/lib/pylinkchecker/models.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/reporter.py -> build/bdist.linux-x86_64/egg/pylinkchecker
copying build/lib/pylinkchecker/urlutil.py -> build/bdist.linux-x86_64/egg/pylinkchecker
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/compat.py to compat.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/__init__.py to __init__.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/api.py to api.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/crawler.py to crawler.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/tests.py to tests.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/diagnose.py to diagnose.cpython-310.pyc
  File "build/bdist.linux-x86_64/egg/pylinkchecker/bs4/diagnose.py", line 20
    print "Diagnostic running on Beautiful Soup %s" % __version__
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/dammit.py to dammit.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/__init__.py to __init__.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder/__init__.py to __init__.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder/_htmlparser.py to _htmlparser.cpython-310.pyc
  File "build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder/_htmlparser.py", line 71
    except (ValueError, OverflowError), e:
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized

byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder/_lxml.py to _lxml.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/builder/_html5lib.py to _html5lib.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/bs4/element.py to element.cpython-310.pyc
  File "build/bdist.linux-x86_64/egg/pylinkchecker/bs4/element.py", line 1204
    print 'Running CSS selector "%s"' % selector
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/models.py to models.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/reporter.py to reporter.cpython-310.pyc
byte-compiling build/bdist.linux-x86_64/egg/pylinkchecker/urlutil.py to urlutil.cpython-310.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
installing scripts to build/bdist.linux-x86_64/egg/EGG-INFO/scripts
running install_scripts
running build_scripts
creating build/scripts-3.10
copying and adjusting pylinkchecker/bin/pylinkcheck.py -> build/scripts-3.10
changing mode of build/scripts-3.10/pylinkcheck.py from 664 to 775
creating build/bdist.linux-x86_64/egg/EGG-INFO/scripts
copying build/scripts-3.10/pylinkcheck.py -> build/bdist.linux-x86_64/egg/EGG-INFO/scripts
changing mode of build/bdist.linux-x86_64/egg/EGG-INFO/scripts/pylinkcheck.py to 775
copying pylinkchecker.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pylinkchecker.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pylinkchecker.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pylinkchecker.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pylinkchecker.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
pylinkchecker.__pycache__.tests.cpython-310: module references __file__
creating dist
creating 'dist/pylinkchecker-0.2-py3.10.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing pylinkchecker-0.2-py3.10.egg
creating /ssd/pylinkchecker/venv/lib/python3.10/site-packages/pylinkchecker-0.2-py3.10.egg
Extracting pylinkchecker-0.2-py3.10.egg to /ssd/pylinkchecker/venv/lib/python3.10/site-packages
  File "/ssd/pylinkchecker/venv/lib/python3.10/site-packages/pylinkchecker-0.2-py3.10.egg/pylinkchecker/bs4/diagnose.py", line 20
    print "Diagnostic running on Beautiful Soup %s" % __version__
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

  File "/ssd/pylinkchecker/venv/lib/python3.10/site-packages/pylinkchecker-0.2-py3.10.egg/pylinkchecker/bs4/element.py", line 1204
    print 'Running CSS selector "%s"' % selector
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

  File "/ssd/pylinkchecker/venv/lib/python3.10/site-packages/pylinkchecker-0.2-py3.10.egg/pylinkchecker/bs4/builder/_htmlparser.py", line 71
    except (ValueError, OverflowError), e:
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized

Very likely some dependency now needs updating?
