pingcap / tispark

TiSpark is built for running Apache Spark on top of TiDB/TiKV

License: Apache License 2.0

Scala 56.97% Shell 0.09% Makefile 0.01% Java 39.99% ANTLR 2.71% Starlark 0.25%
bigdata spark tidb tikv

tispark's Introduction

TiSpark


TiSpark is a thin layer built for running Apache Spark on top of TiDB/TiKV/TiFlash to answer complex OLAP queries. It enjoys the merits of both the Spark platform and the distributed clusters of TiKV/TiFlash while integrating seamlessly with TiDB.

The figure below shows the architecture of TiSpark.

architecture

  • TiSpark integrates well with the Spark Catalyst Engine. It provides precise control over computation, which allows Spark to read data from TiKV efficiently. It also supports index seek, which significantly improves the performance of point query execution.
  • It uses several strategies to push down computation in order to reduce the size of the dataset that Spark SQL has to handle, which accelerates query execution. It also uses TiDB's built-in statistical information for query plan optimization.
  • From the perspective of data integration, TiSpark + TiDB provides a solution that runs both transactions and analytics directly on the same platform without building and maintaining any ETL pipelines. This simplifies the system architecture and reduces the cost of maintenance.
  • In addition, you can deploy and use tools from the Spark ecosystem for further data processing and manipulation on TiDB, for example using TiSpark for data analysis and ETL, retrieving data from TiKV as a data source for machine learning, generating reports from the scheduling system, and so on.

TiSpark relies on the availability of TiKV clusters and PD. You also need to set up and use a Spark cluster.
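As an illustration only, a minimal SparkSession setup that wires TiSpark into Spark SQL might look like the sketch below; the PD address is a placeholder, and the exact configuration keys and class names for your TiSpark version are listed in the user guide.

import org.apache.spark.sql.SparkSession

// Minimal sketch of enabling TiSpark in a Spark application.
// "127.0.0.1:2379" is a placeholder for your PD address.
val spark = SparkSession
  .builder()
  .appName("tispark-example")
  .config("spark.sql.extensions", "org.apache.spark.sql.TiExtensions")
  .config("spark.tispark.pd.addresses", "127.0.0.1:2379")
  .config("spark.sql.catalog.tidb_catalog", "org.apache.spark.sql.catalyst.catalog.TiCatalog")
  .config("spark.sql.catalog.tidb_catalog.pd.addresses", "127.0.0.1:2379")
  .getOrCreate()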

Most of the TiSpark logic is inside a thin layer, namely, the tikv-client library.

Doc TOC

About mysql-connector-java

We do not provide the mysql-connector-java dependency because of the limitations of the GPL license.

The following TiSpark versions no longer include mysql-connector-java in their jars:

  • TiSpark > 3.0.1
  • TiSpark > 2.5.1 for TiSpark 2.5.x
  • TiSpark > 2.4.3 for TiSpark 2.4.x

TiSpark needs mysql-connector-java for write and authentication operations. Please import mysql-connector-java manually when you need to write or authenticate.

  • You can import it by putting the jar into the Spark jars directory.

  • You can also import it when you submit a Spark job, for example:

spark-submit --jars tispark-assembly-3.0_2.12-3.1.0-SNAPSHOT.jar,mysql-connector-java-8.0.29.jar

Feature Support

The support matrix across TiSpark 2.4.x, TiSpark 2.5.x, TiSpark 3.0.x, and TiSpark master covers the following features:

  • SQL select without tidb_catalog
  • SQL select with tidb_catalog
  • SQL delete from with tidb_catalog
  • DataFrame append
  • DataFrame reads

See here for more details.
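As a hedged illustration of the catalog-based select and the DataFrame append listed above: the database, table, and connection values below are placeholders, the option names follow the TiSpark data source documentation, and the write path also needs mysql-connector-java as described earlier.

// Catalog-based read: query a TiDB table through tidb_catalog (placeholder names).
val df = spark.sql("select * from tidb_catalog.tpch_test.orders")

// DataFrame append: write back to TiDB through the TiSpark data source.
// All option values below are placeholders for your own cluster.
df.write
  .format("tidb")
  .option("tidb.addr", "127.0.0.1")
  .option("tidb.port", "4000")
  .option("tidb.user", "root")
  .option("tidb.password", "")
  .option("database", "tpch_test")
  .option("table", "orders_copy")
  .mode("append")
  .save()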

Limitations

  • TiDB has supported views since TiDB 3.0, but TiSpark currently does not support views. Users are not able to observe or access data through views with TiSpark.

  • The Spark config spark.sql.runSQLOnFiles should not be set to false, or you may get the Error in query: Table or view not found error.

  • Using the "{db}.{table}.{colname}" style in a condition is not supported, e.g. select * from t where db.t.col1 = 1.

  • Null in an aggregation is not supported, e.g. select sum(null) from t group by col1.

  • The dependency tispark-assembly should not be packaged into a JAR-of-JARs file (for example, one built with spring-boot-maven-plugin), or you will get a ClassNotFoundException. You can solve this by adding spark-wrapper-spark-version to your dependencies or by constructing another form of jar file.

  • TiSpark doesn't support GBK character set.

  • TiSpark does not support all collation rules. Currently, TiSpark only supports the following collations: utf8_bin, utf8_general_ci, utf8_unicode_ci, utf8mb4_bin, utf8mb4_general_ci and utf8mb4_unicode_ci.

  • If spark.sql.ansi.enabled is false, an overflow of sum(bigint) will not cause an error but will "wrap" the result. You can cast bigint to decimal to avoid the overflow (see the sketch after this list).

  • TiSpark supports retrieving data from a table with an Expression Index, but the Expression Index will not be used by the TiSpark planner.
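As a small illustration of the overflow workaround mentioned above (t and col1 are placeholder table and column names):

// Casting bigint to decimal before summing avoids wrap-around on overflow.
spark.sql("select sum(cast(col1 as decimal(38, 0))) from t").show()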

Follow us

Twitter

@PingCAP

Forums

For English users, go to TiDB internals.

For Chinese users, go to AskTUG.

License

TiSpark is under the Apache 2.0 license. See the LICENSE file for details.

tispark's People

Contributors

birdstorm, daemonxiao, dependabot[bot], dieken, fenghaojiang, gkdoc, guliangliangatpingcap, handora, humengyu2012, ilovesoup, liancheng, mahjonp, marsishandsome, novemser, purelind, qidi1, shiyuhang0, sre-bot, ti-chi-bot, ti-srebot, trafalgarricardolu, windtalker, wolfstudy, wsabc01, wuhuizuo, xuanyu66, yegetables, zanmato1984, zhangyangyu, zhexuany


tispark's Issues

Inconsistent result when comparing DateTime with Timestamp

Known affected branch:

master

Schema:

+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                                                                      |
+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tb_n  | CREATE TABLE `tb_n` (
  `dt` datetime DEFAULT CURRENT_TIMESTAMP,
  `ts` timestamp DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin |
+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Data:

+---------------------+---------------------+
| dt                  | ts                  |
+---------------------+---------------------+
| 2017-12-11 11:38:36 | 2017-12-11 11:38:36 |
+---------------------+---------------------+
1 row in set (0.01 sec)

SQL:

select * from tb_n where dt = ts

TiSpark:
Nothing selected.

Spark with JDBC:
TiDB:

+---------------------+---------------------+
| dt                  | ts                  |
+---------------------+---------------------+
| 2017-12-11 11:38:36 | 2017-12-11 11:38:36 |
+---------------------+---------------------+

TiSpark plan:

== Physical Plan ==
TiDB CoprocessorRDD{[table: tb_n] , Ranges: Start:[-9223372036854775808], End: [9223372036854775807], Columns: [dt], [ts], Filter: Not(IsNull([dt])), Not(IsNull([ts])), Equal([dt], [ts])}

It seems we pushed down Equal([dt], [ts]) and TiKV filtered out all the rows.

Error handling constant value in aggregation function

If we pass a constant string as a parameter to an aggregation function such as sum() or avg(), Spark encounters unexpected problems.

For the query spark.sql("select avg('some_random_word') from customer").show, Spark throws the exception below:

17/10/10 17:34:48 ERROR Executor: Exception in task 1.0 in stage 20.0 (TID 93)
java.lang.ArrayIndexOutOfBoundsException: 1
	at com.pingcap.tikv.row.ObjectRowImpl.set(ObjectRowImpl.java:155)
	at com.pingcap.tikv.operation.transformer.Cast.set(Cast.java:33)
	at com.pingcap.tikv.operation.transformer.RowTransformer.transform(RowTransformer.java:106)
	at com.pingcap.tispark.TiRDD$$anon$1.toSparkRow(TiRDD.scala:58)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:70)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:50)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
17/10/10 17:34:48 WARN TaskSetManager: Lost task 1.0 in stage 20.0 (TID 93, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 1
	at com.pingcap.tikv.row.ObjectRowImpl.set(ObjectRowImpl.java:155)
	at com.pingcap.tikv.operation.transformer.Cast.set(Cast.java:33)
	at com.pingcap.tikv.operation.transformer.RowTransformer.transform(RowTransformer.java:106)
	at com.pingcap.tispark.TiRDD$$anon$1.toSparkRow(TiRDD.scala:58)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:70)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:50)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Please

BigInt type index encoding wrong

In the tispark_test test database, create an index:

CREATE INDEX `idx_tp_bigint`
ON `full_data_type_table` (tp_bigint);

Use

select tp_int, tp_bigint, tp_mediumint from full_data_type_table where tp_bigint = 122222

The query returns nothing.

Client OOM on large data set

spark.sql(
  """select
    |  o_orderpriority,
    |  count(*) as order_count
    |from
    |  orders
    |where
    |  o_orderdate >= date '1993-07-01'
    |  and o_orderdate < date '1993-07-01' + interval '3' month
    |  and exists (
    |    select
    |      *
    |    from
    |      lineitem
    |    where
    |      l_orderkey = o_orderkey
    |      and l_commitdate < l_receiptdate
    |  )
    |group by
    |  o_orderpriority
    |order by
    |  o_orderpriority
  """.stripMargin).show
This is on scale factor 200.
com.pingcap.tikv.exception.TiClientInternalException: Error Closing Store client.
at com.pingcap.tikv.operation.SelectIterator.lambda$new$1(SelectIterator.java:102)
at com.pingcap.tikv.operation.SelectIterator.readNextRegion(SelectIterator.java:120)
at com.pingcap.tikv.operation.SelectIterator.hasNext(SelectIterator.java:128)
at com.pingcap.tispark.TiRDD$$anon$1.hasNext(TiRDD.scala:67)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.pingcap.tikv.exception.GrpcException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: io.grpc.netty.NettyClientTransport$3: Frame size 81522888 exceeds maximum: 67108864.
at com.pingcap.tikv.policy.RetryPolicy.callWithRetry(RetryPolicy.java:73)
at com.pingcap.tikv.AbstractGrpcClient.callWithRetry(AbstractGrpcClient.java:54)
at com.pingcap.tikv.region.RegionStoreClient.coprocess(RegionStoreClient.java:244)
at com.pingcap.tikv.operation.SelectIterator.lambda$new$1(SelectIterator.java:94)
... 17 more
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: io.grpc.netty.NettyClientTransport$3: Frame size 81522888 exceeds maximum: 67108864.
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:227)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:208)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:141)
at com.pingcap.tikv.AbstractGrpcClient.lambda$callWithRetry$0(AbstractGrpcClient.java:56)
at com.pingcap.tikv.policy.RetryPolicy.callWithRetry(RetryPolicy.java:58)
... 20 more

Dependency not found

Which repository should I use to resolve the dependency? I tried Maven Central but it does not exist there.

build.sbt
"com.pingcap.tispark" % "tispark_2.11" % "0.1.0-SNAPSHOT",

Throws error:
[error] (*:update) sbt.ResolveException: unresolved dependency: com.pingcap.tispark#tispark;0.1.0-SNAPSHOT: not found

select distinct on decimal with integral part exceeding the bigint range reveals incorrect result

mysql> desc decimals;
+-------+-------------+------+------+---------+-------+
| Field | Type        | Null | Key  | Default | Extra |
+-------+-------------+------+------+---------+-------+
| a     | decimal(20) | NO   |      | NULL    |       |
+-------+-------------+------+------+---------+-------+
1 row in set (0.00 sec)
scala> spark.sql("select a from decimals").show(truncate=false)
+---------------------+
|a                    |
+---------------------+
|10000000000000000000 |
|-10000000000000000000|
|100000               |
|1                    |
+---------------------+

scala> spark.sql("select distinct a from decimals").show(truncate=false)
+------+
|a     |
+------+
|1     |
|100000|
|10    |
|-10   |
+------+
scala> spark.sql("select distinct a from decimals").explain
== Physical Plan ==
*HashAggregate(keys=[a#14], functions=[])
+- Exchange hashpartitioning(a#14, 200)
   +- *HashAggregate(keys=[a#14], functions=[])
      +- TiDB CoprocessorRDD{
 Table: decimals
 Ranges: Start:[-9223372036854775808], End: [9223372036854775807]
 Columns: [a]
 Aggregates: first([a])
 Group By: [[a] ASC]
}

In correspondence to this mirror issue

Make TiKV-Client a pom child instead of submodule

Git submodules are not very convenient for our scenario. Almost every change made in the client must also be accepted on the TiSpark side, and the submodule workflow makes this too hard.
We would like to turn it into a pom child module to avoid the submodule modification problems.

Error doing database mapping

We may encounter a com.fasterxml.jackson.core.JsonParseException if we define a LongText column in our table. Full stack trace:

com.pingcap.tikv.exception.TiClientInternalException: Invalid JSON value for Type TiTableInfo: {"id":148,"name":{"O":"full_data_type_table","L":"full_data_type_table"},"charset":"","collate":"","cols":[{"id":1,"name":{"O":"id_dt","L":"id_dt"},"offset":0,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":3,"Flag":4227,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":2,"name":{"O":"tp_varchar","L":"tp_varchar"},"offset":1,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":15,"Flag":0,"Flen":45,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":3,"name":{"O":"tp_datetime","L":"tp_datetime"},"offset":2,"origin_default":null,"default":"CURRENT_TIMESTAMP","generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":12,"Flag":128,"Flen":19,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":4,"name":{"O":"tp_blob","L":"tp_blob"},"offset":3,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":252,"Flag":128,"Flen":65535,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":5,"name":{"O":"tp_binary","L":"tp_binary"},"offset":4,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":254,"Flag":128,"Flen":2,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":6,"name":{"O":"tp_date","L":"tp_date"},"offset":5,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":10,"Flag":128,"Flen":10,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":7,"name":{"O":"tp_timestamp","L":"tp_timestamp"},"offset":6,"origin_default":null,"default":"CURRENT_TIMESTAMP","generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":7,"Flag":1152,"Flen":19,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":8,"name":{"O":"tp_year","L":"tp_year"},"offset":7,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":13,"Flag":128,"Flen":4,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":9,"name":{"O":"tp_bigint","L":"tp_bigint"},"offset":8,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":8,"Flag":128,"Flen":20,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":10,"name":{"O":"tp_decimal","L":"tp_decimal"},"offset":9,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":246,"Flag":128,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":11,"name":{"O":"tp_double","L":"tp_double"},"offset":10,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":5,"Flag":128,"Flen":22,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":12,"name":{"O":"tp_float","L":"tp_float"},"offset":11,"origin_default":null,"default":null,"gene
rated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":4,"Flag":128,"Flen":12,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":13,"name":{"O":"tp_int","L":"tp_int"},"offset":12,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":3,"Flag":128,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":14,"name":{"O":"tp_mediumint","L":"tp_mediumint"},"offset":13,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":9,"Flag":128,"Flen":9,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":15,"name":{"O":"tp_real","L":"tp_real"},"offset":14,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":5,"Flag":128,"Flen":22,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":16,"name":{"O":"tp_smallint","L":"tp_smallint"},"offset":15,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":2,"Flag":128,"Flen":6,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":17,"name":{"O":"tp_tinyint","L":"tp_tinyint"},"offset":16,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":1,"Flag":128,"Flen":4,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":18,"name":{"O":"tp_char","L":"tp_char"},"offset":17,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":254,"Flag":0,"Flen":10,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":19,"name":{"O":"tp_nvarchar","L":"tp_nvarchar"},"offset":18,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":15,"Flag":0,"Flen":40,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":20,"name":{"O":"tp_longtext","L":"tp_longtext"},"offset":19,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":251,"Flag":0,"Flen":4294967295,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":21,"name":{"O":"tp_mediumtext","L":"tp_mediumtext"},"offset":20,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":250,"Flag":0,"Flen":16777215,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":22,"name":{"O":"tp_text","L":"tp_text"},"offset":21,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":252,"Flag":0,"Flen":65535,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":23,"name":{"O":"tp_tinytext","L":"tp_tinytext"},"offset":22,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":249,"Flag":0,"Flen":255,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":24,"name":{"O":"tp_bit","L":"tp_bit"},"offset":23,"origin_default":null,"default":null,"generat
ed_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":16,"Flag":32,"Flen":1,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""}],"index_info":null,"fk_info":null,"state":5,"pk_is_handle":true,"comment":"","auto_inc_id":0,"max_col_id":24,"max_idx_id":0}

  at com.pingcap.tikv.catalog.CatalogTransaction.parseFromJson(CatalogTransaction.java:178)
  at com.pingcap.tikv.catalog.CatalogTransaction.getTables(CatalogTransaction.java:160)
  at com.pingcap.tikv.catalog.Catalog$CatalogCache.loadTables(Catalog.java:93)
  at com.pingcap.tikv.catalog.Catalog$CatalogCache.listTables(Catalog.java:79)
  at com.pingcap.tikv.catalog.Catalog.listTables(Catalog.java:141)
  at com.pingcap.tispark.MetaManager.getTables(MetaManager.scala:32)
  at org.apache.spark.sql.TiContext$$anonfun$tidbMapDatabase$1.apply(TiContext.scala:42)
  at org.apache.spark.sql.TiContext$$anonfun$tidbMapDatabase$1.apply(TiContext.scala:41)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.sql.TiContext.tidbMapDatabase(TiContext.scala:40)
  ... 48 elided
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Numeric value (4294967295) out of range of int
 at [Source: {"id":148,"name":{"O":"full_data_type_table","L":"full_data_type_table"},"charset":"","collate":"","cols":[{"id":1,"name":{"O":"id_dt","L":"id_dt"},"offset":0,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":3,"Flag":4227,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":2,"name":{"O":"tp_varchar","L":"tp_varchar"},"offset":1,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":15,"Flag":0,"Flen":45,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":3,"name":{"O":"tp_datetime","L":"tp_datetime"},"offset":2,"origin_default":null,"default":"CURRENT_TIMESTAMP","generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":12,"Flag":128,"Flen":19,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":4,"name":{"O":"tp_blob","L":"tp_blob"},"offset":3,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":252,"Flag":128,"Flen":65535,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":5,"name":{"O":"tp_binary","L":"tp_binary"},"offset":4,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":254,"Flag":128,"Flen":2,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":6,"name":{"O":"tp_date","L":"tp_date"},"offset":5,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":10,"Flag":128,"Flen":10,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":7,"name":{"O":"tp_timestamp","L":"tp_timestamp"},"offset":6,"origin_default":null,"default":"CURRENT_TIMESTAMP","generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":7,"Flag":1152,"Flen":19,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":8,"name":{"O":"tp_year","L":"tp_year"},"offset":7,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":13,"Flag":128,"Flen":4,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":9,"name":{"O":"tp_bigint","L":"tp_bigint"},"offset":8,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":8,"Flag":128,"Flen":20,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":10,"name":{"O":"tp_decimal","L":"tp_decimal"},"offset":9,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":246,"Flag":128,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":11,"name":{"O":"tp_double","L":"tp_double"},"offset":10,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":5,"Flag":128,"Flen":22,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":12,"name":{"O":"tp_float","L":"tp_float"},"offset":11,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":4,"
Flag":128,"Flen":12,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":13,"name":{"O":"tp_int","L":"tp_int"},"offset":12,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":3,"Flag":128,"Flen":11,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":14,"name":{"O":"tp_mediumint","L":"tp_mediumint"},"offset":13,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":9,"Flag":128,"Flen":9,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":15,"name":{"O":"tp_real","L":"tp_real"},"offset":14,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":5,"Flag":128,"Flen":22,"Decimal":-1,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":16,"name":{"O":"tp_smallint","L":"tp_smallint"},"offset":15,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":2,"Flag":128,"Flen":6,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":17,"name":{"O":"tp_tinyint","L":"tp_tinyint"},"offset":16,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":1,"Flag":128,"Flen":4,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""},{"id":18,"name":{"O":"tp_char","L":"tp_char"},"offset":17,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":254,"Flag":0,"Flen":10,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":19,"name":{"O":"tp_nvarchar","L":"tp_nvarchar"},"offset":18,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":15,"Flag":0,"Flen":40,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":20,"name":{"O":"tp_longtext","L":"tp_longtext"},"offset":19,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":251,"Flag":0,"Flen":4294967295,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":21,"name":{"O":"tp_mediumtext","L":"tp_mediumtext"},"offset":20,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":250,"Flag":0,"Flen":16777215,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":22,"name":{"O":"tp_text","L":"tp_text"},"offset":21,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":252,"Flag":0,"Flen":65535,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":23,"name":{"O":"tp_tinytext","L":"tp_tinytext"},"offset":22,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":249,"Flag":0,"Flen":255,"Decimal":0,"Charset":"utf8","Collate":"utf8_bin","Elems":null},"state":5,"comment":""},{"id":24,"name":{"O":"tp_bit","L":"tp_bit"},"offset":23,"origin_default":null,"default":null,"generated_expr_string":"","generated_stored":false,"dependences":null,"type":{"Tp":16,"Fl
ag":32,"Flen":1,"Decimal":0,"Charset":"binary","Collate":"binary","Elems":null},"state":5,"comment":""}],"index_info":null,"fk_info":null,"state":5,"pk_is_handle":true,"comment":"","auto_inc_id":0,"max_col_id":24,"max_idx_id":0}; line: 1, column: 5926] (through reference chain: com.pingcap.tikv.meta.TiTableInfo["cols"]->java.util.ArrayList[19]->com.pingcap.tikv.meta.TiColumnInfo["type"]->com.pingcap.tikv.meta.InternalTypeHolder["Flen"])
  at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:210)
  at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:177)
  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1474)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:465)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:378)
  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1099)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:296)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:463)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:378)
  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1099)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:296)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:245)
  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:217)
  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:463)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:378)
  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1099)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:296)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:133)
  at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3736)
  at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2726)
  at com.pingcap.tikv.catalog.CatalogTransaction.parseFromJson(CatalogTransaction.java:173)
  ... 57 more

LongText has an attribute Flen with value 4294967295, which is beyond the upper bound of int.

Document

We need more serious documentation.

Regression test for tispark

For now we can use at least a scale factor 1 TPC-H dataset for regression. Some of the logic only triggers after a region split, so this size is necessary for now.
We need an automated test Spark app for it.

Consider combining splits if compute is fully pushed to TiKV

If the user runs a small Spark cluster compared to the TiKV cluster and aggregation is fully pushed down to the TiKV coprocessor, the compute left for Spark is minimal and the data will be smaller than usual.
We might need to combine multiple tasks into one so that TiKV coprocessor reads are triggered quickly.

SquirrelSQL connection doesn't refresh schema

A SquirrelSQL connection does not refresh the schema. Manually calling "show databases" works fine. We need to confirm whether this is a problem of the Hive Thrift server or of our modified version.

[DAG] Should not push down time/duration types.

Time:

select tp_datetime,tp_time from full_data_type_table  where tp_datetime = tp_time limit 20

Caused by: com.pingcap.tikv.exception.SelectException: unknown error Other(StringError("Can\'t eval_time from Datum"))

Duration:

select tp_time,tp_timestamp from full_data_type_table  where tp_time = tp_timestamp limit 20

Caused by: com.pingcap.tikv.exception.SelectException: unknown error Other(StringError("Can\'t eval_duration from Datum"))

More details to be added.

how to configure & use TiSpark's SQL Interactive shell

spark-shell works fine now.

However, tispark-sql does not.

When running bin/tispark-sql:
tispark-sql> select count(*) from tidb;
17/10/11 10:56:56 INFO SparkSqlParser: Parsing command: select count(*) from tidb
17/10/11 10:56:57 WARN PDClient: failed to get member from pd server.
io.grpc.StatusRuntimeException: UNAVAILABLE: Transport closed for unknown reason
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:227)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:208)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:141)
at com.pingcap.tikv.kvproto.PDGrpc$PDBlockingStub.getMembers(PDGrp

How should this be configured?

Equality/inequality with null in the where clause fails

The problem occurs when we have an equality comparison involving null values,

e.g.,

select x from a where x = null;

where x is of any type

The following exception is thrown:

ResultStage 84 (collect at SparkWrapper.scala:53) failed in 0.022 s due to Job aborted due to stage failure: Task 0 in stage 84.0 failed 1 times, most recent failure: Lost task 0.0 in stage 84.0 (TID 84, localhost, executor driver): com.pingcap.tikv.expression.TiExpressionException: NULL constant has no type
	at com.pingcap.tikv.expression.TiConstant.getType(TiConstant.java:90)
	at com.pingcap.tikv.expression.scalar.And.validateArguments(And.java:52)
	at com.pingcap.tikv.expression.TiFunctionExpression.<init>(TiFunctionExpression.java:34)
	at com.pingcap.tikv.expression.TiBinaryFunctionExpression.<init>(TiBinaryFunctionExpression.java:23)
	at com.pingcap.tikv.expression.scalar.And.<init>(And.java:28)
	at com.pingcap.tikv.predicates.PredicateUtils.mergeCNFExpressions(PredicateUtils.java:34)
	at com.pingcap.tikv.meta.TiSelectRequest.buildTableScan(TiSelectRequest.java:160)
	at com.pingcap.tikv.meta.TiSelectRequest.buildScan(TiSelectRequest.java:90)
	at com.pingcap.tikv.operation.SelectIterator.getRowIterator(SelectIterator.java:61)
	at com.pingcap.tikv.Snapshot.tableRead(Snapshot.java:115)
	at org.apache.spark.sql.tispark.TiRDD$$anon$2.<init>(TiRDD.scala:61)
	at org.apache.spark.sql.tispark.TiRDD.compute(TiRDD.scala:53)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
17/11/29 20:42:23 INFO DAGScheduler: Job 84 failed: collect at SparkWrapper.scala:53, took 0.026328 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 84.0 failed 1 times, most recent failure: Lost task 0.0 in stage 84.0 (TID 84, localhost, executor driver): com.pingcap.tikv.expression.TiExpressionException: NULL constant has no type
	at com.pingcap.tikv.expression.TiConstant.getType(TiConstant.java:90)
	at com.pingcap.tikv.expression.scalar.And.validateArguments(And.java:52)
	at com.pingcap.tikv.expression.TiFunctionExpression.<init>(TiFunctionExpression.java:34)
	at com.pingcap.tikv.expression.TiBinaryFunctionExpression.<init>(TiBinaryFunctionExpression.java:23)
	at com.pingcap.tikv.expression.scalar.And.<init>(And.java:28)
	at com.pingcap.tikv.predicates.PredicateUtils.mergeCNFExpressions(PredicateUtils.java:34)
	at com.pingcap.tikv.meta.TiSelectRequest.buildTableScan(TiSelectRequest.java:160)
	at com.pingcap.tikv.meta.TiSelectRequest.buildScan(TiSelectRequest.java:90)
	at com.pingcap.tikv.operation.SelectIterator.getRowIterator(SelectIterator.java:61)
	at com.pingcap.tikv.Snapshot.tableRead(Snapshot.java:115)
	at org.apache.spark.sql.tispark.TiRDD$$anon$2.<init>(TiRDD.scala:61)
	at org.apache.spark.sql.tispark.TiRDD.compute(TiRDD.scala:53)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
	at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2778)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2375)
	at org.apache.spark.sql.Dataset.collect(Dataset.scala:2351)
	at com.pingcap.spark.SparkWrapper.querySpark(SparkWrapper.scala:53)
	at com.pingcap.spark.TestCase$$anonfun$2.apply(TestCase.scala:190)
	at com.pingcap.spark.TestCase$$anonfun$2.apply(TestCase.scala:190)
	at com.pingcap.spark.Utils$.time(Utils.scala:74)
	at com.pingcap.spark.TestCase.execSpark(TestCase.scala:191)
	at com.pingcap.spark.TestCase.execBothAndJudge(TestCase.scala:253)
	at com.pingcap.spark.TestNull.testConditions(TestNull.scala:47)
	at com.pingcap.spark.TestNull.run(TestNull.scala:64)
	at com.pingcap.spark.TestCase.testAndCalc(TestCase.scala:290)
	at com.pingcap.spark.TestCase.testInline(TestCase.scala:303)
	at com.pingcap.spark.TestCase.test(TestCase.scala:311)
	at com.pingcap.spark.TestCase.work(TestCase.scala:138)
	at com.pingcap.spark.TestCase$$anonfun$work$6.apply(TestCase.scala:142)
	at com.pingcap.spark.TestCase$$anonfun$work$6.apply(TestCase.scala:141)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at com.pingcap.spark.TestCase.work(TestCase.scala:141)
	at com.pingcap.spark.TestCase.init(TestCase.scala:75)
	at com.pingcap.spark.TestFramework$.main(TestFramework.scala:30)
	at com.pingcap.spark.TestFramework.main(TestFramework.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: com.pingcap.tikv.expression.TiExpressionException: NULL constant has no type
	at com.pingcap.tikv.expression.TiConstant.getType(TiConstant.java:90)
	at com.pingcap.tikv.expression.scalar.And.validateArguments(And.java:52)
	at com.pingcap.tikv.expression.TiFunctionExpression.<init>(TiFunctionExpression.java:34)
	at com.pingcap.tikv.expression.TiBinaryFunctionExpression.<init>(TiBinaryFunctionExpression.java:23)
	at com.pingcap.tikv.expression.scalar.And.<init>(And.java:28)
	at com.pingcap.tikv.predicates.PredicateUtils.mergeCNFExpressions(PredicateUtils.java:34)
	at com.pingcap.tikv.meta.TiSelectRequest.buildTableScan(TiSelectRequest.java:160)
	at com.pingcap.tikv.meta.TiSelectRequest.buildScan(TiSelectRequest.java:90)
	at com.pingcap.tikv.operation.SelectIterator.getRowIterator(SelectIterator.java:61)
	at com.pingcap.tikv.Snapshot.tableRead(Snapshot.java:115)
	at org.apache.spark.sql.tispark.TiRDD$$anon$2.<init>(TiRDD.scala:61)
	at org.apache.spark.sql.tispark.TiRDD.compute(TiRDD.scala:53)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Missing support for ENUM and SET type

Currently we support neither the ENUM nor the SET type in TiSpark. When running a query on columns of ENUM or SET type, the following exception may be thrown:

com.pingcap.tikv.codec.InvalidCodecFormatException: Invalid Flag type for : 9
	at com.pingcap.tikv.types.BytesType.decodeNotNull(BytesType.java:49)
	at com.pingcap.tikv.types.DataType.decodeValueToRow(DataType.java:125)
	at com.pingcap.tikv.row.DefaultRowReader.readRow(DefaultRowReader.java:38)
	at com.pingcap.tikv.operation.DAGIterator.next(DAGIterator.java:103)
	at com.pingcap.tikv.operation.DAGIterator.next(DAGIterator.java:42)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:72)
	at com.pingcap.tispark.TiRDD$$anon$1.next(TiRDD.scala:52)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:232)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Reading TiConfigConst.PD_ADDRESSES error

If no PD_ADDRESSES is specified in the configuration file, a NoSuchElementException is thrown.

Position: com.pingcap.tispark.TiUtils, line 96:

val tiConf = TiConfiguration.createDefault(conf.get(TiConfigConst.PD_ADDRESSES))

Maybe we should wrap this line in an if (conf.contains(TiConfigConst.PD_ADDRESSES)) check, as sketched below.
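A minimal sketch of the suggested guard, assuming conf is the Spark configuration object used on that line; the fallback behavior shown here is only illustrative.

// Illustrative guard around the PD address lookup in TiUtils.
val tiConf =
  if (conf.contains(TiConfigConst.PD_ADDRESSES)) {
    TiConfiguration.createDefault(conf.get(TiConfigConst.PD_ADDRESSES))
  } else {
    // Fail with a clear message instead of an unhandled NoSuchElementException.
    throw new IllegalArgumentException(
      s"${TiConfigConst.PD_ADDRESSES} is not set; please configure the PD addresses.")
  }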

Consider fine-grained scheduling for splits

In the best scenario, pressure is spread evenly across all TiKV instances and disks.
For now, we do not consider any specific ordering of tasks. If possible, we might control task ordering to share the pressure better among TiKV instances.

Distinct and group by without aggregates not working

The coprocessor does not accept an empty aggregates list, but Spark processes distinct and group by without aggregates as an empty group by.
Sample:
select distinct a from table;
select a from table group by a;

Neither of the two works for now. They need to be rewritten as
select first(a) from table group by a;
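For illustration, the rewrite can be issued directly from Spark (t and a are placeholder table and column names):

// Rewriting a distinct query so the coprocessor receives a non-empty aggregate list.
spark.sql("select first(a) from t group by a").show()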

Bit column type cannot be pushed down

SQL:

select count(1) from full_data_type_table  where tp_bit = 1
(= can be replaced by <, >, <=, >=, !=, <>)

select count(1) from full_data_type_table  where tp_bit + 1 = 1

Both throw the same exception:

Caused by: java.util.concurrent.ExecutionException: com.pingcap.tikv.exception.SelectException: unknown error Other(StringError("unflatten column column_id: 24 tp: 16 collation: 63 columnLen: 1 decimal: 0 flag: 32 pk_handle: false is not supported yet."))
  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
  at com.pingcap.tikv.operation.SelectIterator.readNextRegion(SelectIterator.java:145)
  ... 17 more
Caused by: com.pingcap.tikv.exception.SelectException: unknown error Other(StringError("unflatten column column_id: 24 tp: 16 collation: 63 columnLen: 1 decimal: 0 flag: 32 pk_handle: false is not supported yet."))
  at com.pingcap.tikv.region.RegionStoreClient.coprocessorHelper(RegionStoreClient.java:192)
  at com.pingcap.tikv.region.RegionStoreClient.coprocess(RegionStoreClient.java:185)
  at com.pingcap.tikv.operation.SelectIterator.createClientAndSendReq(SelectIterator.java:130)
  at com.pingcap.tikv.operation.SelectIterator.lambda$submitTasks$2(SelectIterator.java:113)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  ... 3 more

Schema:

| full_data_type_table | CREATE TABLE `full_data_type_table` (
  `id_dt` int(11) NOT NULL,
  `tp_varchar` varchar(45) DEFAULT NULL,
  `tp_datetime` datetime DEFAULT CURRENT_TIMESTAMP,
  `tp_blob` blob DEFAULT NULL,
  `tp_binary` binary(2) DEFAULT NULL,
  `tp_date` date DEFAULT NULL,
  `tp_timestamp` timestamp DEFAULT CURRENT_TIMESTAMP,
  `tp_year` year DEFAULT NULL,
  `tp_bigint` bigint(20) DEFAULT NULL,
  `tp_decimal` decimal DEFAULT NULL,
  `tp_double` double DEFAULT NULL,
  `tp_float` float DEFAULT NULL,
  `tp_int` int(11) DEFAULT NULL,
  `tp_mediumint` mediumint(9) DEFAULT NULL,
  `tp_real` double DEFAULT NULL,
  `tp_smallint` smallint(6) DEFAULT NULL,
  `tp_tinyint` tinyint(4) DEFAULT NULL,
  `tp_char` char(10) DEFAULT NULL,
  `tp_nvarchar` varchar(40) DEFAULT NULL,
  `tp_longtext` longtext DEFAULT NULL,
  `tp_mediumtext` mediumtext DEFAULT NULL,
  `tp_text` text DEFAULT NULL,
  `tp_tinytext` tinytext DEFAULT NULL,
  `tp_bit` bit(1) DEFAULT NULL,
  `tp_time` time DEFAULT NULL,
  `tp_enum` enum('1','2','3','4') DEFAULT NULL,
  `tp_set` set('a','b','c','d') DEFAULT NULL,
  PRIMARY KEY (`id_dt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin |

It seems we should not push down some operations on the bit type.

Region / Store related Error during TPC-H 22

Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- *HashAggregate(keys=[], functions=[partial_avg(c_acctbal#65)], output=[sum#163, count#164L])
+- *Project [c_acctbal#65]
+- *Filter substring(c_phone#64, 1, 2) IN (20,40,22,30,39,42,21)
+- Scan CoprocessorRDD[c_acctbal#65,c_phone#64]

at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:112)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:235)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:141)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:368)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:272)
at org.apache.spark.sql.execution.SubqueryExec$$anonfun$relationFuture$1$$anonfun$apply$4.apply(basicPhysicalOperators.scala:554)
at org.apache.spark.sql.execution.SubqueryExec$$anonfun$relationFuture$1$$anonfun$apply$4.apply(basicPhysicalOperators.scala:551)
at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:94)
at org.apache.spark.sql.execution.SubqueryExec$$anonfun$relationFuture$1.apply(basicPhysicalOperators.scala:551)
at org.apache.spark.sql.execution.SubqueryExec$$anonfun$relationFuture$1.apply(basicPhysicalOperators.scala:551)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.pingcap.tikv.exception.GrpcException: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
at com.pingcap.tikv.region.RegionManager.getStoreById(RegionManager.java:144)
at com.pingcap.tikv.region.RegionManager.getRegionStorePairByKey(RegionManager.java:129)
at com.pingcap.tikv.util.RangeSplitter.splitRangeByRegion(RangeSplitter.java:125)
at com.pingcap.tispark.TiRDD.getPartitions(TiRDD.scala:77)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.ShuffleDependency.(Dependency.scala:91)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$.prepareShuffleDependency(ShuffleExchange.scala:261)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:84)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:121)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:112)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
... 28 more
Caused by: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
at com.google.guava4pingcap.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:476)
at com.google.guava4pingcap.util.concurrent.AbstractFuture.get(AbstractFuture.java:455)
at com.google.guava4pingcap.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
at com.pingcap.tikv.region.RegionManager.getStoreById(RegionManager.java:142)
... 61 more
Caused by: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
at io.grpc.Status.asRuntimeException(Status.java:543)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:395)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
at io.grpc.internal.ClientCallImpl.access$100(ClientCallImpl.java:76)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:512)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:429)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:544)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:117)
... 3 more

Not reproducible for now. Likely caused by a mistake in region caching / error handling.

Implement Catalog

It should be very similar to org.apache.spark.sql.catalyst.catalog.ExternalCatalog.
For now we only need to implement read-only interfaces such as listDatabases; a rough sketch is shown below.
For permanent UDFs we might create a table in TiDB to hold them.
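A minimal sketch of the read-only part, assuming we mirror the relevant ExternalCatalog signatures. The TiMeta trait and TiReadOnlyCatalog class below are hypothetical placeholders that only illustrate the intended shape, not the actual TiSpark API:

// Illustrative sketch only: TiMeta stands in for whatever component already
// knows how to enumerate TiDB databases and tables.
trait TiMeta {
  def databaseNames: Seq[String]
  def tableNames(db: String): Seq[String]
}

// Read-only catalog shaped after org.apache.spark.sql.catalyst.catalog.ExternalCatalog.
class TiReadOnlyCatalog(meta: TiMeta) {
  def listDatabases(): Seq[String] = meta.databaseNames

  // Rough approximation of ExternalCatalog's pattern matching ('*' wildcard).
  def listDatabases(pattern: String): Seq[String] =
    meta.databaseNames.filter(_.matches(pattern.replace("*", ".*")))

  def listTables(db: String): Seq[String] = meta.tableNames(db)

  // Write interfaces (createDatabase, createTable, ...) are deliberately left out
  // for now; permanent UDFs could later be backed by a table in TiDB.
}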

Cannot read data normally if schema is changed

When the table schema is changed via a MySQL client or TiDB, we cannot read and retrieve the data correctly.

scala> spark.sql("select * from a").show(truncate=false);
+-----------+--------------------+
|id_int     |id_bigint           |
+-----------+--------------------+
|-2147483648|-9223372036854775808|
+-----------+--------------------+

Now we do some DDL operations:

mysql> alter table a add column ts timestamp;
Query OK, 0 rows affected (0.27 sec)

mysql> select * from a;
+-------------+----------------------+---------------------+
| id_int      | id_bigint            | ts                  |
+-------------+----------------------+---------------------+
| -2147483648 | -9223372036854775808 | 2017-11-28 14:10:02 |
+-------------+----------------------+---------------------+
1 row in set (0.00 sec)

After the schema change:

scala> spark.sql("select * from a").show(truncate=false);
+-----------+--------------------+
|id_int     |id_bigint           |
+-----------+--------------------+
|-2147483648|-9223372036854775808|
+-----------+--------------------+
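
The output above still shows only the two original columns. A possible workaround sketch, assuming the stale result comes from TiSpark caching table metadata when the database was mapped (not confirmed in this report); the PD address and database name below are placeholders:

// Re-create the TiContext and re-map the database so the schema is fetched again.
// "127.0.0.1:2379" and "test" are placeholders for the actual PD address and database.
val ti = new TiContext(spark, List("127.0.0.1:2379"))
ti.tidbMapDatabase("test")
spark.sql("select * from a").show(truncate = false)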

Closing this issue

org.apache.spark.sql.TiContext cannot be applied to (org.apache.spark.sql.SparkSession)

When I use the TiSpark example to run Spark, I get some errors.

scala> import org.apache.spark.sql.TiContext
import org.apache.spark.sql.TiContext

scala> val ti = new TiContext(spark)
<console>:24: error: overloaded method constructor TiContext with alternatives:
(session: org.apache.spark.sql.SparkSession,addressList: java.util.List[String])org.apache.spark.sql.TiContext
(session: org.apache.spark.sql.SparkSession,addressList: scala.collection.immutable.List[String])org.apache.spark.sql.TiContext
cannot be applied to (org.apache.spark.sql.SparkSession)
val ti = new TiContext(spark)
^
My TiSpark was installed with the default settings, so why does the constructor have these alternatives?
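
As the error message itself lists, this build of TiContext only has constructors that also take an address list, so pass the PD addresses explicitly; for example (the address below is a placeholder for your own PD endpoints):

scala> val ti = new TiContext(spark, List("192.168.1.100:2379"))

Depending on the TiSpark version, the PD addresses may instead be read from the spark.tispark.pd.addresses configuration rather than the constructor.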

Store Error during a very simple query

What did you do?

Launch a spark-shell in standalone mode and try to run select count(*) from lineitem.

What did you expect to see?

A number representing the row count of lineitem should be returned.

What did you see instead?


spark.sql("select count(*) from lineitem").show
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- *HashAggregate(keys=[], functions=[partial_sum(count(1)#145L)], output=[sum#147L])
   +- Scan CoprocessorRDD[count(1)#145L]

  at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
  at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:112)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:235)
  at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:141)
  at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:368)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
  at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
  at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
  at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
  ... 48 elided
Caused by: com.pingcap.tikv.exception.GrpcException: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
  at com.pingcap.tikv.region.RegionManager.getStoreById(RegionManager.java:144)
  at com.pingcap.tikv.region.RegionManager.getRegionStorePairByKey(RegionManager.java:129)
  at com.pingcap.tikv.util.RangeSplitter.splitRangeByRegion(RangeSplitter.java:125)
  at com.pingcap.tispark.TiRDD.getPartitions(TiRDD.scala:77)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:91)
  at org.apache.spark.sql.execution.exchange.ShuffleExchange$.prepareShuffleDependency(ShuffleExchange.scala:261)
  at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:84)
  at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:121)
  at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:112)
  at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
  ... 81 more
Caused by: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
  at com.google.guava4pingcap.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:476)
  at com.google.guava4pingcap.util.concurrent.AbstractFuture.get(AbstractFuture.java:455)
  at com.google.guava4pingcap.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
  at com.pingcap.tikv.region.RegionManager.getStoreById(RegionManager.java:142)
  ... 114 more
Caused by: io.grpc.StatusRuntimeException: UNKNOWN: invalid zero store id
  at io.grpc.Status.asRuntimeException(Status.java:543)
  at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:395)
  at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
  at io.grpc.internal.ClientCallImpl.access$100(ClientCallImpl.java:76)
  at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:512)
  at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:429)
  at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:544)
  at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52)
  at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:117)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:748)
