Comments (15)

srowen avatar srowen commented on July 28, 2024

You're mixing the code from the book with the code from the repo. I think @sryza intended to show a simplified version in the text; in the repo it's refactored a bit so that it works better and faster as a whole. The text appears correct and consistent with itself.

To be clear, you do not import the code in the repo in order to use the book listings.

from aas.

Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

Noted that Ch 6 is not by you.
But the chapter code alone, even with the import statement for Java conversions, does not run.


srowen avatar srowen commented on July 28, 2024

No, this error is not from the book listing. You can see it is defined to take two arguments:

def plainTextToLemmas(text: String, stopWords: Set[String])
  : Seq[String] = {
...

You are showing the declaration from this repo.


Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

It is not just the import scala.collection.JavaConversions._ that needs to be added; all of the following are required:

import scala.collection.mutable.ArrayBuffer
import java.util.Properties
import scala.collection.JavaConversions._

Also, isOnlyLetters is used in plainTextToLemmas before it is listed or even mentioned.

Even if these are added and plainTextToLemmas is run, the following statement still generates an error.

val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))

The error is

scala> val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
<console>:56: error: type mismatch;
 found   : (String, String)
 required: String
       val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))

Have not used any code from repo other than the import statements and the isOnlyLetters.
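The mismatch above can be reproduced without Spark: plainText holds (title, contents) pairs, so a function of type String => Seq[String] cannot be mapped over it directly. A minimal sketch with plain Scala collections and made-up sample data (the toy tokenizer here is illustrative, not the book's lemmatizer):

```scala
// plainText holds (title, contents) pairs, as in the chapter.
val plainText = Seq(("Title A", "Some article text"), ("Title B", "More text"))

// A toy stand-in for the book's lemmatizer: it only takes the contents String.
def plainTextToLemmas(text: String, stopWords: Set[String]): Seq[String] =
  text.split("\\s+").map(_.toLowerCase).filterNot(stopWords).toSeq

val stopWords = Set("some", "more")

// plainText.map(plainTextToLemmas(_, stopWords)) would not compile:
// the element type is (String, String), not String.
// Destructuring the pair and passing only the contents fixes it:
val lemmatized = plainText.map { case (title, contents) =>
  plainTextToLemmas(contents, stopWords)
}
```

This is the same shape as the repo's mapPartitions version: the function is applied to contents, not to the whole pair.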


Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

The error is in the 2015-01-21 early release, revision 3.


srowen avatar srowen commented on July 28, 2024

Yes, the imports are fixed in the next draft. isOnlyLetters is actually not in the book. I assume @sryza thought it was self-explanatory but it does break the ability to use it as-is in the shell. I think that can be added to the listing. The type of plainText doesn't match, yes. I think @sryza will have to have a look at that next week and maybe copy this part more directly from the repo:

    val lemmatized = plainText.mapPartitions(iter => {
      val pipeline = createNLPPipeline()
      iter.map{ case(title, contents) => (title, plainTextToLemmas(contents, stopWords, pipeline))}
    })


Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

The following would need to be run before lemmatized is defined.

def loadStopWords(path: String) = scala.io.Source.fromFile(path).getLines().toSet
val stopWords = sc.broadcast(loadStopWords("stopwords.txt")).value

def createNLPPipeline(): StanfordCoreNLP = {
    val props = new Properties()
    props.put("annotators", "tokenize, ssplit, pos, lemma")
    new StanfordCoreNLP(props)
  }


srowen avatar srowen commented on July 28, 2024

This part does not necessarily have to change. But something has to change about the definition of lemmatized in the book listing. Sandy can decide.


sryza avatar sryza commented on July 28, 2024

Changing the book to include loading the stop words, include the definition of isOnlyLetters, and fix the type of lemmatized.

The new listing will look like:

import edu.stanford.nlp.pipeline._
import edu.stanford.nlp.ling.CoreAnnotations._

def createNLPPipeline(): StanfordCoreNLP = {
  val props = new Properties()
  props.put("annotators", "tokenize, ssplit, pos, lemma")
  new StanfordCoreNLP(props)
}

def isOnlyLetters(str: String): Boolean = {
  str.forall(c => Character.isLetter(c))
}

def plainTextToLemmas(text: String, stopWords: Set[String],
    pipeline: StanfordCoreNLP): Seq[String] = {
  val doc = new Annotation(text)
  pipeline.annotate(doc)

  val lemmas = new ArrayBuffer[String]()
  val sentences = doc.get(classOf[SentencesAnnotation])
  for (sentence <- sentences;
      token <- sentence.get(classOf[TokensAnnotation])) {
    val lemma = token.get(classOf[LemmaAnnotation])
    if (lemma.length > 2 && !stopWords.contains(lemma)
        && isOnlyLetters(lemma)) {
      lemmas += lemma.toLowerCase
    }
  }
  lemmas
}

val stopWords = sc.broadcast(
  scala.io.Source.fromFile("stopwords.txt").getLines().toSet).value

val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
  val pipeline = createNLPPipeline()
  it.map { case(title, contents) =>
    plainTextToLemmas(contents, stopWords, pipeline)
  }
})

Thanks for reporting this @dvohra.


Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

Thanks for fixing the ch 6 code, @sryza.


tonychuo avatar tonychuo commented on July 28, 2024

Tried to follow @sryza's listing from 10 Mar. But where is "stopwords.txt"? It threw an exception:
java.io.FileNotFoundException: stopwords.txt


srowen avatar srowen commented on July 28, 2024

The file is here: https://github.com/sryza/aas/blob/master/ch06-lsa/src/main/resources/stopwords.txt

You can download it and put it in your local working directory. (Or anywhere you like locally, and change the path accordingly.)
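If the file has not been downloaded yet, fromFile throws the FileNotFoundException seen above. A hedged variant of loadStopWords (the error message wording is ours, not the book's) makes that failure mode explicit:

```scala
// Load stop words from a local file, failing with a clear message if the
// file has not been downloaded into the working directory yet.
def loadStopWords(path: String): Set[String] =
  try {
    val src = scala.io.Source.fromFile(path)
    try src.getLines().toSet finally src.close()
  } catch {
    case _: java.io.FileNotFoundException =>
      sys.error(s"$path not found: download stopwords.txt from the ch06-lsa resources first")
  }
```

This also closes the Source handle, which the one-liner in the book leaks.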

@sryza is this worth a little errata to note where the file is supposed to come from in the text?


sryza avatar sryza commented on July 28, 2024

@srowen yes, will make it more explicit.


Deepak-Vohra avatar Deepak-Vohra commented on July 28, 2024

As all resource files are from the aas GitHub repo (https://github.com/sryza/aas), it would be more suitable to add a general statement rather than a per-file statement.


CheungZeeCn avatar CheungZeeCn commented on July 28, 2024

@sryza I used code like yours, but something strange happened and I don't know why it turns out like this. Please help.

scala> val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
| val pipeline = createNLPPipeline()
| it.map { case(title, contents) =>
| plainTextToLemmas(contents, stopWords, pipeline)
| }
| })
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2037)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:763)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:762)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:762)
... 65 elided
Caused by: java.io.NotSerializableException: edu.stanford.nlp.pipeline.StanfordCoreNLP
Serialization stack:
- object not serializable (class: edu.stanford.nlp.pipeline.StanfordCoreNLP, value: edu.stanford.nlp.pipeline.StanfordCoreNLP@534c6596)
- field (class: $iw, name: pipeline, type: class edu.stanford.nlp.pipeline.StanfordCoreNLP)
- object (class $iw, $iw@13305ac2)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@17dda65c)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@17a5e7a3)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@2763e9e8)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@554a639a)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@29d022c4)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@49eabc72)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@4c5d8f2a)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@550a4f71)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7108c04e)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@52563b54)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7522dc83)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@585fec41)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@3406f4f4)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@3c20dd8)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@5d04d627)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@c1b9374)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@20590f3a)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@1e6a490f)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7c623256)
- field (class: $line72.$read, name: $iw, type: class $iw)
- object (class $line72.$read, $line72.$read@3ef632b0)
- field (class: $iw, name: $line72$read, type: class $line72.$read)
- object (class $iw, $iw@706013a)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@2f3cd4c0)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7f8271b5)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@40e3affc)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7e4e8fcd)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@7fdf4b6a)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@5d27a11)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@3b6ab962)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@73b488ed)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@3dbb0717)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@32f55ca9)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@3fb906a2)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@1724a440)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@498a0c1d)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@40d9ef44)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@4b1a1336)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@63bba857)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@29c356bc)
- field (class: $iw, name: $iw, type: class $iw)
- object (class $iw, $iw@14cc7e3b)
- field (class: $line118.$read, name: $iw, type: class $iw)
- object (class $line118.$read, $line118.$read@4e5d756)
- field (class: $iw, name: $line118$read, type: class $line118.$read)
- object (class $iw, $iw@6f24929a)
- field (class: $iw, name: $outer, type: class $iw)
- object (class $iw, $iw@57dbcf08)
- field (class: $anonfun$1, name: $outer, type: class $iw)
- object (class $anonfun$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 74 more

Here is my code:

import com.cloudera.datascience.common.XmlInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io._

val path = "hdfs://localhost:9000/spark/wikidump.xml"
@transient val conf = new Configuration()
conf.set(XmlInputFormat.START_TAG_KEY, "<page>")
conf.set(XmlInputFormat.END_TAG_KEY, "</page>")

val kvs = sc.newAPIHadoopFile(path, classOf[XmlInputFormat],
classOf[LongWritable], classOf[Text], conf)

val rawXmls = kvs.map(p => p._2.toString)

import edu.umd.cloud9.collection.wikipedia.language._
import edu.umd.cloud9.collection.wikipedia._

def wikiXmlToPlainText(xml: String): Option[(String, String)] = {
val page = new EnglishWikipediaPage()
WikipediaPage.readPage(page, xml)
if(page.isEmpty) None
else Some(page.getTitle, page.getContent)
}

val plainText = rawXmls.flatMap(wikiXmlToPlainText)

import edu.stanford.nlp.pipeline._
import edu.stanford.nlp.ling.CoreAnnotations._
import java.util.Properties
import scala.collection.mutable.ArrayBuffer
import scala.collection.JavaConversions._
import org.apache.spark.rdd.RDD

def createNLPPipeline(): StanfordCoreNLP = {
val props = new Properties()
props.put("annotators", "tokenize, ssplit, pos, lemma")
new StanfordCoreNLP(props)
}

def isOnlyLetter(str: String): Boolean = {
str.forall(c => Character.isLetter(c))
}

def plainTextToLemmas(text: String, stopwords: Set[String],
pipeline: StanfordCoreNLP) : Seq[String] = {

val doc = new Annotation(text)
pipeline.annotate(doc)

val lemmas = new ArrayBuffer[String]()
val sentences = doc.get(classOf[SentencesAnnotation])

for (sentence <- sentences;
    token <- sentence.get(classOf[TokensAnnotation])) {

    val lemma = token.get(classOf[LemmaAnnotation])

    if (lemma.length > 2 && !stopwords.contains(lemma)
        && isOnlyLetter(lemma) ) {
            lemmas += lemma.toLowerCase
    }

}
lemmas

}

val stopWords = sc.broadcast(
scala.io.Source.fromFile("/Users/cheungzee/opdir/sparkLearn/lsa/stopWords.txt").getLines().toSet
).value

val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
val pipeline = createNLPPipeline()
it.map({
case(title, contents) => plainTextToLemmas(contents, stopWords, pipeline)
})
})

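The stack trace points at a field named pipeline on one of the REPL's $iw wrapper objects, which suggests a top-level val pipeline defined earlier in the same spark-shell session is being dragged in with the closure; re-entering the block via :paste in a fresh shell usually avoids this. The check Spark is performing can be sketched without Spark, using plain Java serialization and a bare Object as a stand-in for the non-serializable StanfordCoreNLP:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Stand-in for a non-serializable object such as StanfordCoreNLP.
val pipeline = new Object

// Spark's ClosureCleaner does essentially this before shipping a task:
// Java-serialize the closure and fail fast if anything it transitively
// references is not Serializable.
def isSerializable(o: AnyRef): Boolean =
  try {
    new ObjectOutputStream(new ByteArrayOutputStream).writeObject(o)
    true
  } catch {
    case _: NotSerializableException => false
  }

// Plain serializable data ships fine; anything referencing the pipeline
// fails the same way the task closure does.
isSerializable(List("a", "b"))
isSerializable(List("a", pipeline))
```

Constructing the pipeline inside mapPartitions, as in the fixed listing, keeps it out of the serialized closure entirely; the REPL capture is what reintroduces it.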

