
scrawler's Introduction

S(cala) Crawler


Install

libraryDependencies += "com.marekkadek" %% "scrawler" % "0.0.3"

The library cross-compiles for Scala 2.11 and 2.12.

Usage

Crawlers

You can create your own crawler by subclassing the Crawler class. Let's see how it would look for a crawler whose effects (crawling the web) are captured by fs2.Task and which yields data only in the form of String. Let's make a crawler that follows every https link and gives us the URLs of the websites it visits.

class MyCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  override protected def onDocument(document: Document): Stream[Task, Yield[String]] = {
    val url = YieldData(document.location)
    val followableLinks = document.root
      .select("a[href^='https://']") // follow only links starting with https
      .toSeq
      .flatMap(_.attr("href"))       // get the href attribute of each link
      .map(Visit)                    // visit those links

    // first yield the url of the website as data, then continue by visiting the links
    Stream.emit(url) ++ Stream.emits(followableLinks)
  }
}

We stream actions such as YieldData and Visit, which are currently the only two allowed. Here's how Yield is defined:

sealed trait Yield[+A]
final case class YieldData[A](a: A) extends Yield[A]
final case class Visit(url: String) extends Yield[Nothing]

We can execute either sequential or parallel crawling.

val crawler = new MyCrawler

// crawl wikipedia sequentially and take 10 elements (urls of visited websites)
val urls: Vector[String] = crawler.sequentialCrawl("https://wikipedia.org")
    .take(10).runLog.unsafeRun

// crawl wikipedia in parallel and take 10 elements (urls of visited websites)
implicit val strategy: Strategy = Strategy.fromFixedDaemonPool(128)
val urls2: Vector[String] = crawler.parallelCrawl("https://wikipedia.org", maxConnections = 8)
    .take(10).runLog.unsafeRun

You can just as well pipe them into a file, Kafka, or anything else that plays nicely with fs2 :)
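For instance, here is a rough sketch of piping the crawled URLs into a file using fs2's text and io helpers (the file name is our choice; adjust if your fs2 version's API differs):

import java.nio.file.Paths
import fs2.{io, text}

// Illustrative only: write each crawled url on its own line to urls.txt
crawler.sequentialCrawl("https://wikipedia.org")
  .take(10)
  .map(url => url + "\n")
  .through(text.utf8Encode)
  .through(io.file.writeAll(Paths.get("urls.txt")))
  .run
  .unsafeRun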

As seen in the example, Crawler takes a sequence of browsers to use during crawling when you extend it. By default, it randomly selects which browser to use; you can change this behaviour by overriding the pickBrowser method.

class MyCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  override protected def onDocument(document: Document): Stream[Task, Yield[String]] = ???

  // picking browser may be effectful
  override protected def pickBrowser(forUrl: String): Task[Browser[Task]] = ???
 }
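For example, here is a sketch of a deterministic pickBrowser that rotates through the configured browsers instead of picking one at random (the class and field names are ours, not part of the library):

import java.util.concurrent.atomic.AtomicInteger

class RotatingCrawler(browsers: Vector[Browser[Task]])
    extends Crawler[Task, String](browsers) {

  private val counter = new AtomicInteger(0)

  override protected def onDocument(document: Document): Stream[Task, Yield[String]] =
    Stream.emit(YieldData(document.location))

  // rotate through the browsers; the selection is wrapped in Task.delay
  // because picking a browser may be effectful
  override protected def pickBrowser(forUrl: String): Task[Browser[Task]] =
    Task.delay(browsers(counter.getAndIncrement() % browsers.size))
}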

Browsers

Any browser that implements the Browser trait can be used. Currently there is JsoupBrowser, with an HtmlUnit-based browser a work in progress.

To create a JsoupBrowser, use JsoupBrowser[Task] (or a different effect if you're not using Task). It has several overloads; for example, you can also pass in a proxy, a user agent, and so on:

val proxy = ProxySettings.http("122.193.14.106", 81)
val browser = JsoupBrowser[Task](proxy)
val browser2 = JsoupBrowser[Task](connectionTimeout = 5.seconds,
    userAgent = "Mozilla",
    validateTLSCertificates = false)

Credits

Greatly inspired by the awesome [Rui's scala-scraper](https://github.com/ruippeixotog/scala-scraper) and Python's Scrapy. Thank you!

scrawler's People

Contributors

kadekm, visox


scrawler's Issues

Retrying

Requests may (and will) fail due to unrelated errors. There should be a configurable mechanism for retrying.
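As a starting point, a minimal sketch of a retry wrapper around an effect (the helper and the attempt count are illustrative; where it should hook into the crawler is exactly what needs to be decided):

import fs2.Task

// Illustrative retry helper: re-run a failing Task up to `attempts` times
def retry[A](task: Task[A], attempts: Int): Task[A] =
  task.attempt.flatMap {
    case Right(a)                => Task.now(a)
    case Left(_) if attempts > 1 => retry(task, attempts - 1)
    case Left(error)             => Task.fail(error)
  }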

Performance tests

  • it would be good to see how the sequential and parallel crawlers are doing... although for web requests it's going to be IO-bound anyway 👎
  • third parties won't be happy if we use them for testing, and the results won't tell us much anyway... or will they? Need to think about it. 👎
  • the first goal should be the ability to observe changes at a macro level: at least a rough idea of whether parallel crawling is still parallel and still faster than sequential, etc., some general sense of non-brokenness 👍

Unit tests

This should be obvious, but the API is nowhere near stable yet...

Typed CSS selectors

Use case: "a[href]" selects only links that are guaranteed to have an href attribute, so attr("href") should no longer return Option[String].
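A small sketch of the idea, as a wrapper over today's API (the Anchor type and helper are hypothetical):

// Hypothetical wrapper: "a[href]" guarantees the attribute is present, yet the
// current API still returns Option[String] and thus forces the flatMap below.
final case class Anchor(href: String)

def selectAnchors(document: Document): Seq[Anchor] =
  document.root
    .select("a[href]")
    .toSeq
    .flatMap(_.attr("href")) // with typed selectors this could be a plain map over String
    .map(Anchor)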

parallelCrawl emits visits/stream-results in one chunk

Code to reproduce:

import com.marekkadek.scraper.Document
import com.marekkadek.scraper.jsoup.JsoupBrowser
import com.marekkadek.scrawler.crawlers.{Visit, YieldData, Yield, Crawler}
import fs2.{Strategy, Stream, Task}
import scala.concurrent.duration._

class BadCrawler extends Crawler[Task, Int](Seq(JsoupBrowser[Task](
  connectionTimeout = 20.seconds
))) {

  var visited = 0

  override protected def onDocument(document: Document): Stream[Task, Yield[Int]] = {
    val visit = (1 to 10).map{_ =>
      Visit("http://example.com/")
    }

    visited = visited + 1

    println(s"visited: $visited")

    Stream.emit(YieldData(visited)) ++ Stream.emits(visit)
  }
}

object BadCrawler extends App {
  implicit val strategy: Strategy = Strategy.fromFixedDaemonPool(100)

  val crawler = new BadCrawler()

  val stream: Stream[Task, Int] = crawler.parallelCrawl("http://example.com/", maxConnections = 10)

  stream
    .map{result =>
      println(s"result: $result")
      result
    }
    .runLog
    .unsafeRun()

}

Once run, the output looks like this:

visited: 1
result: 1
visited: 2
visited: 3
visited: 4
visited: 5
visited: 6
visited: 7
visited: 8
visited: 9
visited: 10
visited: 11
result: 2
result: 3
result: 4
result: 5
result: 6
result: 7
result: 8
result: 9
result: 10
result: 11
visited: 12
...
visited: 111
result: 12
...
result: 111
visited: 112
// FOR SOME TIME NOTHING 
...
visited: 1111
result: 112
...
result: 1111
// NOTHING HAPPENS (only after quite some time)

I don't mind that 10 visits need to happen before I get 10 results, but when there are more pages to be visited than maxConnections, the behaviour lags: both the visited and result outputs appear suddenly, after a long stretch of evaluation.

It would be desirable to emit results as soon as they are available.

Right now, to work around this privately, I store the to-visit URLs in my own collection and feed them to onDocument in batches of a managed size; that way I only have to wait for the next 10 results.
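A rough sketch of that workaround (all names are ours; the queue is not synchronised, so with parallelCrawl a thread-safe structure would be needed):

class BatchingCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  private val toVisit   = scala.collection.mutable.Queue.empty[String]
  private val batchSize = 10

  override protected def onDocument(document: Document): Stream[Task, Yield[String]] = {
    // remember every discovered link...
    toVisit ++= document.root
      .select("a[href^='https://']")
      .toSeq
      .flatMap(_.attr("href"))
    // ...but only schedule a bounded number of visits per document
    val batch = Seq.fill(math.min(batchSize, toVisit.size))(toVisit.dequeue())
    Stream.emit(YieldData(document.location)) ++ Stream.emits(batch.map(Visit))
  }
}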

Code coverage

Publish code coverage with each commit (and on PRs)
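One possible setup, assuming sbt-scoverage plus a Codecov upload step (the version and service are our assumptions, not a decision):

// project/plugins.sbt — version is illustrative
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")

// CI steps (e.g. in .travis.yml):
//   sbt clean coverage test coverageReport
//   bash <(curl -s https://codecov.io/bash)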

Readme basic usage

  • how to install it
  • how to create a simple crawler
  • how to store results in a file / JSON
  • how to compose crawlers

Telnet into running crawler

  • ability to telnet into a running crawler
  • ability to observe what's currently happening with the crawler
  • ability to kill/cancel/control(?) the crawler
