Spark errors out when saving data to Phoenix

Error message:
18/12/26 11:19:30 INFO spark.ExecutorAllocationManager: Requesting 2 new executors because tasks are backlogged (new desired total will be 25)
18/12/26 11:19:31 INFO scheduler.TaskSetManager: Starting task 21.0 in stage 3.0 (TID 199721, hadoopslave3, executor 29, partition 21, PROCESS_LOCAL, 5065 bytes)
18/12/26 11:19:31 WARN scheduler.TaskSetManager: Lost task 18.0 in stage 3.0 (TID 199718, hadoopslave3, executor 29): org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "INSERT" at line 1, column 1.
at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1185)
at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1268)
at org.apache.phoenix.jdbc.PhoenixPreparedStatement.<init>(PhoenixPreparedStatement.java:94)
at org.apache.phoenix.jdbc.PhoenixConnection.prepareStatement(PhoenixConnection.java:715)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:616)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:783)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:783)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:928)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:928)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2071)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:344)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: NoViableAltException(98@[])
at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:757)
at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 17 more
18/12/26 11:19:31 INFO scheduler.TaskSetManager: Starting task 18.1 in stage 3.0 (TID 199722, hadoopslave3, executor 29, partition 18, PROCESS_LOCAL, 5065 bytes)
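
This parser error has nothing to do with the data itself: the trace shows Spark's generic JDBC sink (JdbcUtils.savePartition) preparing a standard INSERT INTO statement, but Phoenix's SQL grammar has no INSERT statement at all; its only write DML is UPSERT, so the parse fails on the very first token.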
The code doing the save:

frame1.write.format("jdbc").mode(SaveMode.Append) .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver") .option("url", "jdbc:phoenix:192.168.15.40:2181") .option("dbtable", "CROSSROADPRICE") .save()
If I switch to the following instead, the table gets dropped and is never recreated:
frame1.write.format("jdbc").mode(SaveMode.OverWrite) .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver") .option("url", "jdbc:phoenix:192.168.15.40:2181") .option("dbtable", "CROSSROADPRICE") .save()