Comments (1)
This happens even when defining a SparkContext beforehand (as recommended here: https://stackoverflow.com/a/49832046/8236733).
The relevant code snippet is:
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_cores`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
    # sc = SparkSession.builder.config(conf=conf) \
    #     .appName(application_name) \
    #     .getOrCreate()
    # print sc.version
    sc = SparkContext(conf=conf)
    print sc.version
    ss = SparkSession(sc).appName(application_name).getOrCreate()
which generates the output
2.1.0-mapr-1710
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-8-5e35501cdf3c> in <module>()
15 sc = SparkContext(conf=conf)
16 print sc.version
---> 17 ss = SparkSession(sc).appName(application_name).getOrCreate()
18 print ss.version
19 else:
NameError: name 'SparkSession' is not defined
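From the traceback it looks like the name SparkSession is simply never defined in that cell, independent of whether a SparkContext exists. A minimal sketch of what I would have expected to need, assuming SparkSession has to be imported from pyspark.sql and using an illustrative local master and app name in place of the values above:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession  # SparkSession lives in pyspark.sql, not the top-level pyspark package

conf = SparkConf().setAppName("example").setMaster("local[*]")  # illustrative values
sc = SparkContext(conf=conf)
# SparkSession(sc) wraps the already-created context; appName() and getOrCreate()
# are methods of SparkSession.builder, not of a SparkSession instance, so they
# are not chained onto it here.
ss = SparkSession(sc)
print ss.version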
I've never used pyspark before and am very confused, since other docs/articles I have seen seem to indicate that initializing a SparkContext is not needed to use SparkSession in Spark 2 (e.g. here https://www.cloudera.com/documentation/data-science-workbench/latest/topics/cdsw_pyspark.html#pyspark_setup__local_mode, here https://databricks.com/blog/2016/08/15/how-to-use-sparksession-in-apache-spark-2-0.html, or here https://sparkour.urizone.net/recipes/understanding-sparksession/#toc), but that did not work either (as can be seen in the commented-out code above). Note that my environment variables look like:
import os
print os.environ['SPARK_HOME']
print os.environ['PYTHONPATH']
# since I'm using MapR hadoop and require a security ticket
os.environ['MAPR_TICKETFILE_LOCATION'] = "/tmp/maprticket_10003"
#output
/opt/mapr/spark/spark-2.1.0
/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:
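As a sanity check on those paths (just a sketch; the exact output depends on the environment), the notebook can confirm which copy of pyspark it is actually picking up:

import pyspark
print pyspark.__file__   # which pyspark the notebook resolved; given the PYTHONPATH above,
                         # this should land under /opt/mapr/spark/spark-2.1.0/python/
from pyspark.sql import SparkSession   # should import cleanly if the Spark 2.x libs are on the path
print SparkSession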
What does seem to work is...
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_cores`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
    from pyspark.sql import SparkSession
    sc = SparkSession.builder.config(conf=conf) \
        .appName(application_name) \
        .getOrCreate()
    print sc.version
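One thing I notice about this version: what getOrCreate() returns is a SparkSession rather than a SparkContext, so calling it sc is a bit misleading. A small sketch (with spark as an illustrative name and illustrative settings in place of the conf above) of how the underlying context can still be reached from the session:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("example").getOrCreate()
sc = spark.sparkContext   # the SparkContext created (or reused) by the session
print sc.version          # same version string as spark.version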
whereas the pyspark-related imports were just:
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics
Again, I've never used Spark before, so the fact that the original code does not import the package that seems to make things work here makes me question whether I'm using the right kind of SparkSession. Is using the pyspark.sql package the right thing to do, or should I expect SparkSession to be valid after creating a SparkContext (without having to import anything other than the original imports)?
Any explanation as to what is going on here would be appreciated.
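My current working assumption (which is what I'd like confirmed) is that the import is always needed, because SparkSession does not appear to be exposed by the top-level pyspark package, but that a session built afterwards will happily reuse an already-running SparkContext. A sketch of what I mean, with illustrative local settings:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

conf = SparkConf().setAppName("example").setMaster("local[*]")  # illustrative values
sc = SparkContext(conf=conf)
ss = SparkSession.builder.getOrCreate()   # reuses the active context instead of starting a second one
print ss.sparkContext is sc               # should print True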
** The full original code that I am trying to get to work can be found here: https://github.com/cerndb/dist-keras/blob/master/examples/workflow.ipynb (from dist-keras).