
Unmaintained :whale: :coffee: :spider: Scala crawler (spider) framework, inspired by scrapy, created by @gaocegege

Home Page: http://dongyueweb.com/scrala/

Scala 100.00%
scrapy docker scala spider actor-model

scrala's Introduction

scrala


scrala is a web crawling framework for Scala, inspired by scrapy.

Installation

From Docker

The image is published as gaocegege/scrala on Docker Hub.

Create a Dockerfile in your project.

FROM gaocegege/scrala:latest

# COPY the build.sbt and the src directory into the container

Run a single command in Docker:

docker run -v <your src>:/app/src -v <your ivy2 directory>:/root/.ivy2  gaocegege/scrala
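
For example, launched from your project root (the paths below are illustrative; adjust the ivy2 cache location to your machine):

docker run -v $(pwd)/src:/app/src -v $HOME/.ivy2:/root/.ivy2 gaocegege/scrala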

From SBT

Step 1. Add the JitPack repository to your build.sbt at the end of the resolvers:

resolvers += "jitpack" at "https://jitpack.io"

Step 2. Add the dependency

libraryDependencies += "com.github.gaocegege" % "scrala" % "0.1.5"
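
Putting the two steps together, a minimal build.sbt could look like the following; the project name and Scala version are illustrative placeholders, so adjust them to your project:

name := "my-crawler"                     // illustrative project name
scalaVersion := "2.11.8"                 // assumption: use the Scala version your project targets

resolvers += "jitpack" at "https://jitpack.io"

libraryDependencies += "com.github.gaocegege" % "scrala" % "0.1.5"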

From Source Code

git clone https://github.com/gaocegege/scrala.git
cd ./scrala
sbt assembly

You will get the jar in ./target/scala-<version>/.

Example

import com.gaocegege.scrala.core.spider.impl.DefaultSpider
import com.gaocegege.scrala.core.common.response.impl.HttpResponse

class TestSpider extends DefaultSpider {
  // the crawl starts from every URL in this list
  def startUrl = List[String]("http://www.gaocegege.com/resume")

  // called with the response of each start URL
  def parse(response: HttpResponse): Unit = {
    val links = response.getContentParser.select("a")
    for (i <- 0 until links.size()) {
      // schedule a follow-up request for every link, handled by printIt
      request(links.get(i).attr("href"), printIt)
    }
  }

  // callback for the follow-up requests: print the page title
  def printIt(response: HttpResponse): Unit = {
    println(response.getContentParser.title)
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    val test = new TestSpider
    test.begin
  }
}

Just like in scrapy, all you need to do is define startUrl to tell the framework where to start, and override parse(...) to parse the response of the start URL. The request(...) function is the equivalent of yield scrapy.Request(...) in scrapy.

You can find the example project in ./example/.

For Developer

scrala is under active development; feel free to contribute documentation, test cases, pull requests, issues, and anything else you want. I'm a newcomer to Scala, so the code may be hard to read. I'd be glad if someone familiar with Scala coding standards could review the code in this repo :)


scrala's Issues

The halting problem

The problem of the crawler not knowing when it should stop comes from the current overly crude scheduling. A scheduling approach that gives stronger ordering guarantees will probably be needed to fix it.

Right now, whenever requests come in, they are all handed to the downloaderManager no matter how many there are, and the downloaderManager has no way of knowing whether it will keep receiving requests from the Engine, so nothing in the whole system knows when the program should stop.

I'd like to change this later so that the downloaderManager takes the initiative and fetches requests itself, obtaining only one at a time. I think that would solve the problem, but it still needs more thought.
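
A minimal sketch of that pull-based idea (not scrala's actual code): the downloader asks the engine for work and receives at most one request per ask, so the engine can detect "queue empty and nothing in flight" and shut the crawl down. All actor and message names here are illustrative.

import akka.actor.{Actor, ActorSystem, Props}
import scala.collection.mutable

case object GiveMeWork                                    // downloader -> engine: "I am idle"
case class Download(url: String)                          // engine -> downloader: one unit of work
case class Finished(url: String, newUrls: List[String])   // downloader -> engine: result + discovered links

class Engine extends Actor {
  private val queue = mutable.Queue[String]("http://www.gaocegege.com/resume")
  private var inFlight = 0

  def receive: Receive = {
    case GiveMeWork =>
      if (queue.nonEmpty) {
        inFlight += 1
        sender() ! Download(queue.dequeue())
      } else if (inFlight == 0) {
        context.system.terminate() // nothing queued and nothing running: the crawl is done
      }
    case Finished(_, newUrls) =>
      inFlight -= 1
      queue ++= newUrls
      self.tell(GiveMeWork, sender()) // the downloader is idle again, try to hand it more work
  }
}

class Downloader extends Actor {
  def receive: Receive = {
    case Download(url) =>
      // ... fetch and parse `url` here, collecting any newly discovered links ...
      sender() ! Finished(url, Nil)
  }
}

object PullDemo extends App {
  val system = ActorSystem("pull-demo")
  val engine = system.actorOf(Props[Engine], "engine")
  val downloader = system.actorOf(Props[Downloader], "downloader")
  engine.tell(GiveMeWork, downloader) // kick off the loop on behalf of the idle downloader
}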

Documentation and Testing

Right now scrala is just a simple framework, but it still needs proper documentation and test cases.

Language context sensitive crawling

Language context sensitive crawling. I have a webapp which I would like to crawl and index using e.g. Lucene, but in this use case I would prefer to do two crawls, or to segregate the crawling by language in some way.

How do you cover this use case, or otherwise how would you incorporate it? Basically, the language is declared at the top of the response page, e.g.

<!DOCTYPE html>
<html lang="es">
...
</html>
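
One possible way to handle this inside a spider is to read the lang attribute of the <html> element in parse(...) and route each page to per-language handling from there. This is only a sketch: it assumes getContentParser returns a Jsoup Document (as the example above suggests), and indexSpanish/indexDefault are hypothetical helpers standing in for your Lucene indexing code.

import com.gaocegege.scrala.core.spider.impl.DefaultSpider
import com.gaocegege.scrala.core.common.response.impl.HttpResponse
import org.jsoup.nodes.Document

class LanguageAwareSpider extends DefaultSpider {
  def startUrl = List[String]("http://example.com/")

  def parse(response: HttpResponse): Unit = {
    val doc = response.getContentParser
    // read the language declared on the <html> element, e.g. "es"
    val lang = doc.select("html").attr("lang")
    if (lang == "es") indexSpanish(doc) else indexDefault(doc)

    // keep crawling: every followed link comes back through parse as well
    val links = doc.select("a")
    for (i <- 0 until links.size()) {
      request(links.get(i).attr("href"), parse)
    }
  }

  // hypothetical helpers: feed the document into per-language Lucene indexes
  def indexSpanish(doc: Document): Unit = { /* index into the Spanish index */ }
  def indexDefault(doc: Document): Unit = { /* index into the default index */ }
}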
