hazelcast / hazelcast-spark
Spark Connector for Hazelcast
License: Apache License 2.0
I have the configuration below, and one Hazelcast server running with it.
<map name="mycache">
    <in-memory-format>BINARY</in-memory-format>
    <statistics-enabled>true</statistics-enabled>
    <optimize-queries>true</optimize-queries>
    <cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>0</time-to-live-seconds>
    <max-idle-seconds>0</max-idle-seconds>
    <eviction-policy>NONE</eviction-policy>
    <max-size policy="PER_NODE">1000000</max-size>
    <eviction-percentage>25</eviction-percentage>
    <near-cache>
        <max-size>1000000</max-size>
        <time-to-live-seconds>0</time-to-live-seconds>
        <max-idle-seconds>60</max-idle-seconds>
        <eviction-policy>LRU</eviction-policy>
        <invalidate-on-change>true</invalidate-on-change>
        <in-memory-format>BINARY</in-memory-format>
        <cache-local-entries>false</cache-local-entries>
        <eviction size="1000" max-size-policy="ENTRY_COUNT" eviction-policy="LFU"/>
    </near-cache>
</map>
In my producer Spark app, I generate a million User records and write them with saveToHazelcastMap(); the count of the cached RDD is one million, as expected.
In my consumer Spark app, when I read the cache using fromHazelcastMap(), I get only 10k records. I tried tweaking the default configs, but it didn't help. It looks like a bug to me.
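For context, a minimal sketch of the write/read round trip described above. The method names saveToHazelcastMap / fromHazelcastMap and the implicit conversion package are taken from this report and its stack trace; the map name, server address, and the HazelcastSparkContext wrapper on the read side are assumptions based on the connector's documented usage, not verified against a cluster:

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Implicits that add saveToHazelcastMap to pair RDDs
// (package path as seen in the stack trace below).
import com.hazelcast.spark.connector.toHazelcastRDDFunctions

// hazelcast.server.addresses is the property the connector reads;
// the address here is a placeholder.
val conf = new SparkConf()
  .setAppName("producer")
  .set("hazelcast.server.addresses", "127.0.0.1:5701")
val sc = new SparkContext(conf)

// Producer side: a million (id, user) pairs written to the IMap "mycache".
val users = sc.parallelize(1 to 1000000).map(i => (i, s"user-$i"))
users.saveToHazelcastMap("mycache")

// Consumer side (assumed API, per the connector README): wrap the
// SparkContext and load the map back as an RDD, then count it.
// val hsc = new com.hazelcast.spark.connector.HazelcastSparkContext(sc)
// val fromMap = hsc.fromHazelcastMap[Int, String]("mycache")
// println(fromMap.count())
```

If the round trip is set up like this and the consumer still sees only 10k entries, the discrepancy would point at the connector's read path rather than the map configuration.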
I have a Spark cluster running in Amazon EMR, and I have configured a tag-based AwsConfig in the Hazelcast config, with multicast disabled. I then got the following exception:
17/04/23 08:45:15 ERROR [Driver] ApplicationMaster: User class threw exception: java.util.NoSuchElementException: hazelcast.server.addresses
java.util.NoSuchElementException: hazelcast.server.addresses
at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:235)
at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:235)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.SparkConf.get(SparkConf.scala:235)
at com.hazelcast.spark.connector.conf.ConfigurationProperties$.getServerAddress(ConfigurationProperties.scala:30)
at com.hazelcast.spark.connector.conf.SerializableConf.<init>(SerializableConf.scala:8)
at com.hazelcast.spark.connector.rdd.HazelcastRDDFunctions.<init>(HazelcastRDDFunctions.scala:18)
at com.hazelcast.spark.connector.package$.toHazelcastRDDFunctions(package.scala:15)
at ...
Looking at the code, I saw that hazelcast.server.addresses is mandatory and has no default value, contrary to what the README suggests.
Also, it seems this wouldn't work if I wanted to use only an external configuration file, rather than adding the setting to the SparkConf.
Is this a bug or intended behavior? How do I create a Hazelcast cluster based on the EMR membership?
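From the stack trace, the connector calls SparkConf.get("hazelcast.server.addresses") with no fallback (ConfigurationProperties.getServerAddress), so the property must be present on the SparkConf before any Hazelcast RDD function is used. A possible workaround for the external-file case, sketched here under the assumption of a hypothetical one-address-per-line file (the path and format are illustrative, not part of the connector):

```scala
import org.apache.spark.SparkConf
import scala.io.Source

// Hypothetical external file listing Hazelcast members, one "host:port" per line.
val membersFile = "/etc/hazelcast/members"
val addresses = Source.fromFile(membersFile).getLines().mkString(",")

// Copy the addresses into the property the connector actually looks up,
// so SerializableConf can find it when the RDD functions initialize.
val conf = new SparkConf()
  .set("hazelcast.server.addresses", addresses)
```

This does not help with EMR-tag-based discovery itself; it only bridges an external address list into the SparkConf key the connector requires.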
Is there any work being done to provide support for Spark 2.0?