
AutoCrawler_google_url

  • Modified from AutoCrawler (this repository is a fork of cuqs/autocrawler_google_url)
  • Splits the search step from the download step, and searches only on Google

How to use

  1. Install Chrome

  2. pip install -r requirements.txt

  3. Write search keywords in keywords_collect/name_meta.json (a hypothetical sketch of the file format appears after this list)

  4. Make the bundled chromedriver binaries executable:

    chmod 755 chromedriver/*
  5. Run "main.py"

    python3 main.py [--skip true] [--threads 4] [--face false] [--no_gui auto] [--limit 0]
    # example
    python main.py --skip true --threads 2 --face false --no_gui auto --limit 0
  6. The collected URL links will be saved to the 'collected_links' directory.

  7. Run "check_collected.py"

    python check_collected.py

  8. Run "download_links.py"

    python download_links.py --download_all

    • Or download a single keyword:

    python download_links.py --download_single p2
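
As an illustration of step 3, the sketch below writes a keyword file with Python. The schema shown is an assumption (a simple mapping from keyword IDs such as "p2", as used by --download_single, to search terms); check it against the name_meta.json shipped with the repository before relying on it.

    # Hypothetical sketch only: the real schema of keywords_collect/name_meta.json
    # may differ from this assumed mapping of keyword IDs to Google search terms.
    import json
    from pathlib import Path

    keywords = {
        "p1": "golden retriever",
        "p2": "siberian husky",
    }

    path = Path("keywords_collect/name_meta.json")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(keywords, ensure_ascii=False, indent=2), encoding="utf-8")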

Arguments

--skip true              Skips a keyword if its download directory already exists. Useful
                         when re-running an interrupted download.

--threads 4              Number of download threads to use.

--face false             Face search mode

--no_gui auto            Headless (no GUI) mode. Speeds up full-resolution mode, but can be
                         unstable in thumbnail mode.
                         Default: "auto" - false if full=false, true if full=true.
                         Useful on Docker or other Linux systems without a display.
                   
--limit 0                Maximum number of images to download per site (0: unlimited).
--proxy-list ''          Comma-separated proxy list, e.g. "socks://127.0.0.1:1080,http://127.0.0.1:1081".
                         Each thread randomly picks one proxy from the list (see the sketch below).
--print_url false        Prints each image URL as it is downloaded.
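
For illustration, a minimal sketch (not the repository's actual code) of how a worker thread could pick a proxy from the comma-separated --proxy-list value:

    import random

    def choose_proxy(proxy_list_arg):
        # Split the comma-separated value and pick one proxy at random,
        # as each download thread is described as doing.
        proxies = [p.strip() for p in proxy_list_arg.split(",") if p.strip()]
        return random.choice(proxies) if proxies else None

    print(choose_proxy("socks://127.0.0.1:1080,http://127.0.0.1:1081"))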

Remote crawling through SSH on your server

sudo apt-get install xvfb <- virtual display server

sudo apt-get install screen <- lets you close the SSH session while the crawler keeps running

screen -S s1

Xvfb :99 -ac & DISPLAY=:99 python3 main.py

Customize

You can build your own crawler by modifying collect_links.py; a rough sketch of the kind of logic involved follows.
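
The sketch below is a hypothetical outline only, assuming Selenium with ChromeDriver; the function name and CSS selector are illustrative and are not the repository's actual API.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    def collect_google_image_links(keyword, headless=True, limit=50):
        # Open Google Image search for the keyword and return image URLs
        # scraped from the result page (illustrative selector only).
        options = Options()
        if headless:
            options.add_argument("--headless=new")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get("https://www.google.com/search?q={}&tbm=isch".format(keyword))
            images = driver.find_elements(By.CSS_SELECTOR, "img")[:limit]
            return [img.get_attribute("src") for img in images if img.get_attribute("src")]
        finally:
            driver.quit()

Because Google's markup changes frequently (see Issues below), any selector like the one above needs regular maintenance.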

Issues

Google's site changes frequently, so please open an issue if the crawler stops working.
