reedv commented on June 9, 2024

I hit the NameError below even when defining a SparkContext beforehand (as recommended here: https://stackoverflow.com/a/49832046/8236733).

The relevant code snippet is:

    conf = SparkConf()
    conf.set("spark.app.name", application_name)
    conf.set("spark.master", master)
    conf.set("spark.executor.cores", `num_cores`)        # Python 2 backticks == repr()
    conf.set("spark.executor.instances", `num_executors`)
    conf.set("spark.locality.wait", "0")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

    # Check if the user is running Spark 2.0+
    if using_spark_2:
        # sc = SparkSession.builder.config(conf=conf) \
        #         .appName(application_name) \
        #         .getOrCreate()
        # print sc.version
        sc = SparkContext(conf=conf)
        print sc.version
        ss = SparkSession(sc).appName(application_name).getOrCreate()

which generates the output

    2.1.0-mapr-1710
    
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-8-5e35501cdf3c> in <module>()
         15       sc = SparkContext(conf=conf)
         16       print sc.version
    ---> 17       ss = SparkSession(sc).appName(application_name).getOrCreate()
         18       print ss.version
         19 else:
    
    NameError: name 'SparkSession' is not defined
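
From what I can tell, the NameError itself is a missing import: SparkSession lives in the pyspark.sql module and does not appear to be exported by the top-level pyspark package, so it has to be imported explicitly before it can be referenced:

    # SparkSession is defined in pyspark.sql, not in the top-level
    # pyspark package, so this import is required before using it
    from pyspark.sql import SparkSession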

I have never used pyspark before and am confused, since other docs/articles I have seen seem to indicate that initializing a SparkContext is not needed to use SparkSession in Spark 2 (e.g. here https://www.cloudera.com/documentation/data-science-workbench/latest/topics/cdsw_pyspark.html#pyspark_setup__local_mode, here https://databricks.com/blog/2016/08/15/how-to-use-sparksession-in-apache-spark-2-0.html, or here https://sparkour.urizone.net/recipes/understanding-sparksession/#toc), but that did not work either (as can be seen in the commented-out code above). Note my environment variables look like:

    import os
    print os.environ['SPARK_HOME']
    print os.environ['PYTHONPATH']
    # since I'm using MapR hadoop and require a security ticket
    os.environ['MAPR_TICKETFILE_LOCATION'] = "/tmp/maprticket_10003"

    # output:
    /opt/mapr/spark/spark-2.1.0
    /opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:

What does seem to work is...

    conf = SparkConf()
    conf.set("spark.app.name", application_name)
    conf.set("spark.master", master)
    conf.set("spark.executor.cores", `num_cores`)        # Python 2 backticks == repr()
    conf.set("spark.executor.instances", `num_executors`)
    conf.set("spark.locality.wait", "0")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

    # Check if the user is running Spark 2.0+
    if using_spark_2:
        from pyspark.sql import SparkSession
        sc = SparkSession.builder.config(conf=conf) \
                .appName(application_name) \
                .getOrCreate()
        print sc.version
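
One thing I noticed (if I understand the API correctly): the variable is named sc here, but getOrCreate() returns a SparkSession, not a SparkContext. The underlying context should still be reachable through the session when RDD-level APIs are needed:

    # `sc` above is really a SparkSession; the wrapped SparkContext
    # can be pulled out of it for RDD-level work
    real_sc = sc.sparkContext
    print real_sc.version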

whereas the pyspark-related imports were just:

    from pyspark import SparkContext
    from pyspark import SparkConf
    from pyspark.ml.feature import StandardScaler
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.feature import StringIndexer
    from pyspark.ml.evaluation import MulticlassClassificationEvaluator
    from pyspark.mllib.evaluation import BinaryClassificationMetrics

Again, I have never used spark-anything before, so the fact that the original code did not import the package that seems to make things work here makes me question whether I'm using the right kind of SparkSession. Is importing SparkSession from the pyspark.sql package the right thing to do, or should I expect SparkSession to be valid after creating a SparkContext (without having to import anything beyond the original imports)?
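
For reference, here is how I currently understand the two session-creation patterns in Spark 2.x (a sketch of my understanding, happy to be corrected):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = SparkConf().setAppName(application_name).setMaster(master)

    # Pattern 1 (Spark 2.x style): the builder creates (or reuses) a
    # session and its SparkContext in one step. Note appName()/config()
    # are methods on SparkSession.Builder, not on SparkSession itself.
    ss = SparkSession.builder.config(conf=conf).getOrCreate()

    # Pattern 2: wrap an already-existing SparkContext in a session.
    # Chaining .appName(...).getOrCreate() onto SparkSession(sc), as in
    # my failing snippet above, would fail even with the import fixed,
    # since those are builder methods.
    sc = ss.sparkContext
    ss2 = SparkSession(sc)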

Any explanation as to what is going on here would be appreciated.

The full original code that I am trying to get to work can be found here (from dist-keras): https://github.com/cerndb/dist-keras/blob/master/examples/workflow.ipynb
