ttauveron / k8s-big-data-experiments
Experiments produced during an end-of-studies project (ETS, H2018)
19/02/13 07:47:16 INFO LineBufferedStream: stdout: volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-qr75q
19/02/13 07:47:16 INFO LineBufferedStream: stdout: node name: ip-10-0-1-206.ec2.internal
19/02/13 07:47:16 INFO LineBufferedStream: stdout: start time: 2019-02-13T07:47:09Z
19/02/13 07:47:16 INFO LineBufferedStream: stdout: container images: gnut3ll4/spark:v1.0.14
19/02/13 07:47:16 INFO LineBufferedStream: stdout: phase: Pending
19/02/13 07:47:16 INFO LineBufferedStream: stdout: status: [ContainerStatus(containerID=null, image=gnut3ll4/spark:v1.0.14, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=PodInitializing, additionalProperties={}), additionalProperties={}), additionalProperties={})]
19/02/13 07:47:17 INFO LineBufferedStream: stdout: 2019-02-13 07:47:17 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
19/02/13 07:47:17 INFO LineBufferedStream: stdout: pod name: blockdirectionandvelocitycalculator-29555b1a814c3546b5b2b02421be48ca-driver
19/02/13 07:47:17 INFO LineBufferedStream: stdout: namespace: default
19/02/13 07:47:17 INFO LineBufferedStream: stdout: labels: spark-app-selector -> spark-76932595d11747f0a6d9bb713dc4a4ac, spark-role -> driver
19/02/13 07:47:17 INFO LineBufferedStream: stdout: pod uid: 90f8960a-2f63-11e9-916c-0eb1657f013c
19/02/13 07:47:17 INFO LineBufferedStream: stdout: creation time: 2019-02-13T07:47:09Z
19/02/13 07:47:17 INFO LineBufferedStream: stdout: service account name: default
19/02/13 07:47:17 INFO LineBufferedStream: stdout: volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-qr75q
19/02/13 07:47:17 INFO LineBufferedStream: stdout: node name: ip-10-0-1-206.ec2.internal
19/02/13 07:47:17 INFO LineBufferedStream: stdout: start time: 2019-02-13T07:47:09Z
19/02/13 07:47:17 INFO LineBufferedStream: stdout: container images: gnut3ll4/spark:v1.0.14
19/02/13 07:47:17 INFO LineBufferedStream: stdout: phase: Running
19/02/13 07:47:17 INFO LineBufferedStream: stdout: status: [ContainerStatus(containerID=docker://e02438a30d2097174b2e1f9ac1cc51556ab5367f3854080819b6b7a51dbdc511, image=gnut3ll4/spark:v1.0.14, imageID=docker-pullable://gnut3ll4/spark@sha256:b762f966b5f6ec43eb3f14a8214a12e7dbd93f3e14e71ccc7b43d3db4f0cbca6, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=Time(time=2019-02-13T07:47:17Z, additionalProperties={}), additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={})]
19/02/13 07:48:43 INFO BatchSession: Stopping BatchSession 21...
Exception in thread "Thread-57" java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:283)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.livy.utils.LineBufferedStream$$anon$1.run(LineBufferedStream.scala:40)
19/02/13 07:48:43 WARN BatchSession$: spark-submit exited with code 143
19/02/13 07:48:43 ERROR SparkProcApp: spark-submit exited with code 143
19/02/13 07:48:43 INFO BatchSession: Stopped BatchSession 21.
NAME READY STATUS RESTARTS AGE
1e8d8405f9336b0a1ecd5c2e0c26443-gcgxm 1/1 Running 0 23d
bit-on-bottom-85cb5dc68b-dlf8c 1/1 Running 0 13d
blockdirectionandvelocitycalculator-29555b1a814c3546b5b2b02421be48ca-driver 1/1 Running 0 5m54s
blockdirectionandvelocitycalculator-29555b1a814c3546b5b2b02421be48ca-exec-1 1/1 Running 0 5m41s
It seems that the Spark master and worker nodes need the same libraries (jars), notably hadoop-aws and aws-java-sdk.
Currently, the libraries are baked into the Docker image, which is not a problem for a relatively small number of workers, but can waste a significant amount of disk space with a large number of workers.
One solution would be to add a component responsible for hosting the libraries, whose network path could be mounted by the Spark pods.
Apparently, Kubernetes already has mechanisms that allow this:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
It could also be worth considering an SSHFS mount (although if it has to copy the libraries anyway, the result is potentially the same as adding them to the image), or simply leaving things as they currently are.
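As a sketch of the shared-volume approach, the jars could live on a PersistentVolumeClaim mounted read-only by the Spark pods. The claim name, size, and mount path below are illustrative assumptions, not taken from this repo, and the `spark.kubernetes.*.volumes.*` options only exist from Spark 2.4 onwards:

```shell
# Hypothetical sketch: a shared ReadOnlyMany claim holding the Spark jars.
# Claim name, storage size and mount path are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-jars-pvc
spec:
  accessModes:
    - ReadOnlyMany          # many driver/executor pods mount the same jars
  resources:
    requests:
      storage: 1Gi
EOF

# Spark 2.4+ can then mount the claim into the driver (and similarly the
# executors, via spark.kubernetes.executor.volumes.*):
#   --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.jars.mount.path=/opt/spark/extra-jars
#   --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.jars.options.claimName=spark-jars-pvc
```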
There is a Helm chart for the Spark History Server with configuration options for S3 or Azure Blob Storage:
https://banzaicloud.com/blog/spark-history-server/
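A minimal sketch of installing that chart, following the Banzai Cloud blog post linked above. The repo URL, chart name, value key, and bucket name are assumptions taken from that post and should be verified against the current chart before use:

```shell
# Hypothetical sketch based on the Banzai Cloud blog post; names are assumptions.
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm install banzaicloud-stable/spark-hs \
  --set app.logDirectory=s3a://my-spark-event-logs/   # bucket is a placeholder
```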
When spark-submit uses the Kubernetes scheduler, it triggers the creation of pods for the Spark driver and the executors. A Spark configuration should be added to label these pods, so that they can easily be cleaned up afterwards.
https://spark.apache.org/docs/latest/running-on-kubernetes.html
These two configuration keys should do the trick:
spark.kubernetes.driver.label.[LabelName]
spark.kubernetes.executor.label.[LabelName]
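Concretely, a submission using those keys could look like the following. The label key/value, API server address, and application jar are placeholders; the image name is the one seen in the logs above:

```shell
# Label the driver and executor pods at submit time so they can be
# cleaned up with a single selector afterwards. Label key/value,
# API server address and application jar are placeholders.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=gnut3ll4/spark:v1.0.14 \
  --conf spark.kubernetes.driver.label.spark-batch=cleanup \
  --conf spark.kubernetes.executor.label.spark-batch=cleanup \
  local:///opt/spark/examples/jars/spark-examples.jar

# Later, delete everything carrying that label in one command:
kubectl delete pods -l spark-batch=cleanup
```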
A Spark job must be able to read files located on Azure Blob Storage, as it already does for S3.
The jars to include for reading files on Azure Blob Storage:
https://medium.com/@timfpark/cloud-native-big-data-jobs-with-spark-2-3-and-kubernetes-938b04d0da57
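For reference, a hedged sketch of what the submission might look like with the Azure jars (hadoop-azure and azure-storage are the standard pair for the wasbs:// scheme). The jar versions, storage account, container, and key are placeholders; the hadoop-azure version must match the Hadoop version baked into the image:

```shell
# Hypothetical sketch: reading from Azure Blob Storage via wasbs://.
# Jar versions, account, container and key are placeholders.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --jars local:///opt/spark/jars/hadoop-azure-2.7.3.jar,local:///opt/spark/jars/azure-storage-2.0.0.jar \
  --conf spark.hadoop.fs.azure.account.key.<account>.blob.core.windows.net=<access-key> \
  local:///opt/spark/app.jar \
  wasbs://<container>@<account>.blob.core.windows.net/input/
```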