Comments (15)
You're mixing the code from the book with the code from the repo. I think @sryza intended to show a simplified version in the text; in the repo it's refactored a bit so that it works better and faster as a whole. The text appears correct and consistent with itself.
To be clear, you do not import the code in the repo in order to use the book listings.
from aas.
Ch 6 is not by you, noted.
But the chapter code alone, even with the import statement for Java conversions added, does not run.
from aas.
No, this error is not from the book listing. You can see it is defined to take two arguments:
def plainTextToLemmas(text: String, stopWords: Set[String])
: Seq[String] = {
...
You are showing the declaration from this repo.
from aas.
It is not just import scala.collection.JavaConversions._ that needs to be added; all of the following imports are needed:
import scala.collection.mutable.ArrayBuffer
import java.util.Properties
import scala.collection.JavaConversions._
Also,
isOnlyLetters is used in plainTextToLemmas before it is listed or even mentioned.
Even if these are added and plainTextToLemmas is run, the following statement still generates an error.
val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
The error is
scala> val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
<console>:56: error: type mismatch;
found : (String, String)
required: String
val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
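The mismatch arises because plainText is an RDD of (title, contents) pairs, so the underscore placeholder passes the whole tuple where a String is expected. A minimal plain-Scala sketch of the failing call and the destructuring fix, using a List in place of the RDD and a toy stand-in for plainTextToLemmas:

```scala
object TypeMismatchSketch {
  // Toy stand-in for the book's plainTextToLemmas: lowercase, split, drop stop words.
  def plainTextToLemmas(text: String, stopWords: Set[String]): Seq[String] =
    text.split("\\s+").map(_.toLowerCase).filterNot(stopWords.contains).toList

  def main(args: Array[String]): Unit = {
    // Like plainText: pairs of (title, contents), here a List instead of an RDD.
    val plainText = List(("Cat", "The quick cat"), ("Fox", "A lazy fox"))
    val stopWords = Set("the", "a")

    // plainText.map(plainTextToLemmas(_, stopWords))
    //   does not compile: found (String, String), required String.

    // Destructure the pair and lemmatize only the contents:
    val lemmatized = plainText.map { case (_, contents) =>
      plainTextToLemmas(contents, stopWords)
    }
    println(lemmatized)  // List(List(quick, cat), List(lazy, fox))
  }
}
```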
I have not used any code from the repo other than the import statements and isOnlyLetters.
from aas.
The error is in 2015-01-21: Early release revision 3.
from aas.
Yes, the imports are fixed in the next draft. isOnlyLetters is actually not in the book; I assume @sryza thought it was self-explanatory, but its absence does break the ability to use the listing as-is in the shell. I think it can be added to the listing. The type of plainText doesn't match, yes. I think @sryza will have to have a look at that next week and maybe copy this part more directly from the repo:
val lemmatized = plainText.mapPartitions(iter => {
val pipeline = createNLPPipeline()
iter.map{ case(title, contents) => (title, plainTextToLemmas(contents, stopWords, pipeline))}
})
from aas.
The following would need to be run before lemmatized is defined.
def loadStopWords(path: String) = scala.io.Source.fromFile(path).getLines().toSet
val stopWords = sc.broadcast(loadStopWords("stopwords.txt")).value
def createNLPPipeline(): StanfordCoreNLP = {
val props = new Properties()
props.put("annotators", "tokenize, ssplit, pos, lemma")
new StanfordCoreNLP(props)
}
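loadStopWords itself can be sanity-checked outside Spark. A small self-contained sketch, writing a temporary file first (the file contents here are made up):

```scala
import java.nio.file.Files

object StopWordsSketch {
  // Same shape as loadStopWords above: one stop word per line.
  def loadStopWords(path: String): Set[String] =
    scala.io.Source.fromFile(path).getLines().toSet

  def main(args: Array[String]): Unit = {
    val f = Files.createTempFile("stopwords", ".txt")
    Files.write(f, "the\na\nan".getBytes("UTF-8"))
    val stopWords = loadStopWords(f.toString)
    println(stopWords)  // Set(the, a, an)
  }
}
```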
from aas.
This part does not necessarily have to change, but something has to change about the definition of lemmatized in the book listing. Sandy can decide.
from aas.
Changing the book to include loading the stop words, include the definition of isOnlyLetters, and fix the type of lemmatized.
The new listing will look like:
import edu.stanford.nlp.pipeline._
import edu.stanford.nlp.ling.CoreAnnotations._
import java.util.Properties
import scala.collection.mutable.ArrayBuffer
import scala.collection.JavaConversions._
import org.apache.spark.rdd.RDD
def createNLPPipeline(): StanfordCoreNLP = {
val props = new Properties()
props.put("annotators", "tokenize, ssplit, pos, lemma")
new StanfordCoreNLP(props)
}
def isOnlyLetters(str: String): Boolean = {
str.forall(c => Character.isLetter(c))
}
def plainTextToLemmas(text: String, stopWords: Set[String],
pipeline: StanfordCoreNLP): Seq[String] = {
val doc = new Annotation(text)
pipeline.annotate(doc)
val lemmas = new ArrayBuffer[String]()
val sentences = doc.get(classOf[SentencesAnnotation])
for (sentence <- sentences;
token <- sentence.get(classOf[TokensAnnotation])) {
val lemma = token.get(classOf[LemmaAnnotation])
if (lemma.length > 2 && !stopWords.contains(lemma)
&& isOnlyLetters(lemma)) {
lemmas += lemma.toLowerCase
}
}
lemmas
}
val stopWords = sc.broadcast(
scala.io.Source.fromFile("stopwords.txt").getLines().toSet).value
val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
val pipeline = createNLPPipeline()
it.map { case(title, contents) =>
plainTextToLemmas(contents, stopWords, pipeline)
}
})
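The point of mapPartitions in the listing above is that constructing the Stanford pipeline is expensive, so it should happen once per partition rather than once per document. A plain-Scala sketch of that pattern, with a List of Lists standing in for a partitioned RDD and a counter standing in for the pipeline's construction cost:

```scala
object PerPartitionInit {
  var pipelinesBuilt = 0

  // Stand-in for createNLPPipeline: expensive to build, cheap to apply.
  def createPipeline(): String => String = {
    pipelinesBuilt += 1
    _.toLowerCase
  }

  // Stand-in for RDD.mapPartitions: f sees one whole partition at a time.
  def mapPartitions[A, B](partitions: List[List[A]])(f: Iterator[A] => Iterator[B]): List[List[B]] =
    partitions.map(p => f(p.iterator).toList)

  def main(args: Array[String]): Unit = {
    val docs = List(List("One", "Two", "Three"), List("Four", "Five"))  // 2 partitions, 5 docs
    val out = mapPartitions(docs) { it =>
      val pipeline = createPipeline()  // built once per partition
      it.map(pipeline)
    }
    println(out)             // List(List(one, two, three), List(four, five))
    println(pipelinesBuilt)  // 2, not 5
  }
}
```

A plain map over documents would pay the construction cost once per record; with mapPartitions the cost is amortized over every record in the partition.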
Thanks for reporting this @dvohra.
from aas.
Thanks for fixing the ch 6 code, @sryza.
from aas.
Tried to follow @sryza's 10 Mar listing, but where is "stopwords.txt"? It threw an exception:
java.io.FileNotFoundException: stopwords.txt
from aas.
The file is here: https://github.com/sryza/aas/blob/master/ch06-lsa/src/main/resources/stopwords.txt
You can download it and put it in your local working directory (or anywhere you like locally, and change the path accordingly).
@sryza is this worth a little errata to note where the file is supposed to come from in the text?
from aas.
@srowen yes, will make it more explicit.
from aas.
As all resource files are from the aas GitHub repo (https://github.com/sryza/aas), it would be more suitable to add a general statement rather than a per-file statement.
from aas.
@sryza I used code like yours, but something strange happened. I don't know why it turns out like this; please help.
scala> val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
| val pipeline = createNLPPipeline()
| it.map { case(title, contents) =>
| plainTextToLemmas(contents, stopWords, pipeline)
| }
| })
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2037)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:763)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:762)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:762)
... 65 elided
Caused by: java.io.NotSerializableException: edu.stanford.nlp.pipeline.StanfordCoreNLP
Serialization stack:
- object not serializable (class: edu.stanford.nlp.pipeline.StanfordCoreNLP, value: edu.stanford.nlp.pipeline.StanfordCoreNLP@534c6596)
- field (class: $iw, name: pipeline, type: class edu.stanford.nlp.pipeline.StanfordCoreNLP)
- object (class $iw, $iw@13305ac2)
- ... (long chain of nested REPL $iw wrapper frames elided) ...
- field (class: $anonfun$1, name: $outer, type: class $iw)
- object (class $anonfun$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 74 more
Here is my code:
import com.cloudera.datascience.common.XmlInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io._
val path = "hdfs://localhost:9000/spark/wikidump.xml"
@transient val conf = new Configuration()
conf.set(XmlInputFormat.START_TAG_KEY, "<page>")
conf.set(XmlInputFormat.END_TAG_KEY, "</page>")
val kvs = sc.newAPIHadoopFile(path, classOf[XmlInputFormat],
classOf[LongWritable], classOf[Text], conf)
val rawXmls = kvs.map(p => p._2.toString)
import edu.umd.cloud9.collection.wikipedia.language._
import edu.umd.cloud9.collection.wikipedia._
def wikiXmlToPlainText(xml: String): Option[(String, String)] = {
val page = new EnglishWikipediaPage()
WikipediaPage.readPage(page, xml)
if(page.isEmpty) None
else Some(page.getTitle, page.getContent)
}
val plainText = rawXmls.flatMap(wikiXmlToPlainText)
import edu.stanford.nlp.pipeline._
import edu.stanford.nlp.ling.CoreAnnotations._
import java.util.Properties
import scala.collection.mutable.ArrayBuffer
import scala.collection.JavaConversions._
import org.apache.spark.rdd.RDD
def createNLPPipeline(): StanfordCoreNLP = {
val props = new Properties()
props.put("annotators", "tokenize, ssplit, pos, lemma")
new StanfordCoreNLP(props)
}
def isOnlyLetter(str: String): Boolean = {
str.forall(c => Character.isLetter(c))
}
def plainTextToLemmas(text: String, stopwords: Set[String],
pipeline: StanfordCoreNLP) : Seq[String] = {
val doc = new Annotation(text)
pipeline.annotate(doc)
val lemmas = new ArrayBuffer[String]()
val sentences = doc.get(classOf[SentencesAnnotation])
for (sentence <- sentences;
token <- sentence.get(classOf[TokensAnnotation])) {
val lemma = token.get(classOf[LemmaAnnotation])
if (lemma.length > 2 && !stopwords.contains(lemma)
&& isOnlyLetter(lemma) ) {
lemmas += lemma.toLowerCase
}
}
lemmas
}
val stopWords = sc.broadcast(
scala.io.Source.fromFile("/Users/cheungzee/opdir/sparkLearn/lsa/stopWords.txt").getLines().toSet
).value
val lemmatized: RDD[Seq[String]] = plainText.mapPartitions(it => {
val pipeline = createNLPPipeline()
it.map({
case(title, contents) => plainTextToLemmas(contents, stopWords, pipeline)
})
})
from aas.
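For context on the NotSerializableException above: Spark must Java-serialize the function passed to mapPartitions, and judging by the field (class: $iw, name: pipeline, ...) frame, the shell session probably also holds a top-level pipeline val that the REPL's nested wrapper objects drag into that closure. A minimal plain-Scala sketch of the underlying rule, with no Spark involved (Pipeline is a made-up non-serializable stand-in for StanfordCoreNLP):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

object ClosureCapture {
  class Pipeline  // not Serializable, like StanfordCoreNLP

  // True iff obj survives plain Java serialization, the same check Spark's
  // ClosureCleaner performs before shipping a closure to executors.
  def isSerializable(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    val outerPipeline = new Pipeline

    // Captures outerPipeline from the enclosing scope: not serializable.
    val capturing: String => String = s => { outerPipeline.hashCode; s }

    // Builds its Pipeline inside the function body: nothing bad is captured.
    val selfContained: String => String = s => { new Pipeline; s }

    println(isSerializable(capturing))      // false
    println(isSerializable(selfContained))  // true
  }
}
```

The mapPartitions version in the thread is the selfContained shape; a stray top-level pipeline in the same shell session can still turn it into the capturing shape via the REPL's wrapper objects, so restarting the shell or defining everything inside the closure usually clears it.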