
java.lang.ArrayIndexOutOfBoundsException: 1048576 when closing the connection #368

Open
gj-zhang opened this issue Sep 1, 2021 · 0 comments


gj-zhang commented Sep 1, 2021

Environment

  • OS version:
  • JDK version: 1.8
  • ClickHouse Server version: 21.8.4.51
  • ClickHouse Native JDBC version: 2.5.6
  • (Optional) Spark version: 2.4.3
  • (Optional) Other components' version: N/A

Error logs

java.lang.ArrayIndexOutOfBoundsException: 1048576
	at com.github.housepower.jdbc.buffer.CompressedBuffedWriter.writeBinary(CompressedBuffedWriter.java:43)
	at com.github.housepower.jdbc.serde.BinarySerializer.writeVarInt(BinarySerializer.java:49)
	at com.github.housepower.jdbc.protocol.Request.writeTo(Request.java:29)
	at com.github.housepower.jdbc.connect.NativeClient.sendRequest(NativeClient.java:168)
	at com.github.housepower.jdbc.connect.NativeClient.sendData(NativeClient.java:119)
	at com.github.housepower.jdbc.ClickHouseConnection.sendInsertRequest(ClickHouseConnection.java:282)
	at com.github.housepower.jdbc.statement.ClickHousePreparedInsertStatement.close(ClickHousePreparedInsertStatement.java:117)
	at com.test.dc.ck.test.sink.CkSink.init(CkSink.scala:35)
	at com.test.dc.ck.test.sink.CkSink.batchSinkCk(CkSink.scala:104)
	at com.test.dc.ck.test.YsckSparkCkForHive$$anonfun$main$1$$anonfun$apply$1.apply(YsckSparkCkForHive.scala:135)
	at com.test.dc.ck.test.YsckSparkCkForHive$$anonfun$main$1$$anonfun$apply$1.apply(YsckSparkCkForHive.scala:128)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
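The index 1048576 equals 1 MiB, which matches the capacity of the driver's compressed write buffer. A plausible reading of this trace is that a buffer left full by an earlier failed flush is written to again during `close()`, indexing one slot past the end. The sketch below is a hypothetical mirror of that write path, not the driver's actual code; the class, field, and method names are assumptions for illustration only.

```java
public class BufferOverrunSketch {
    // Hypothetical mirror of a compressed buffered writer: a fixed
    // 1 MiB buffer whose position is only reset on a successful flush.
    static final int CAPACITY = 1048576;
    byte[] buf = new byte[CAPACITY];
    int position = 0;

    void writeBinary(byte b) {
        // If a previous flush failed (e.g. the socket was already closed),
        // position can still equal CAPACITY here, so this store throws
        // ArrayIndexOutOfBoundsException: 1048576.
        buf[position++] = b;
    }

    public static void main(String[] args) {
        BufferOverrunSketch w = new BufferOverrunSketch();
        w.position = CAPACITY; // simulate a buffer left full by a failed flush
        try {
            w.writeBinary((byte) 0);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught AIOOBE");
        }
    }
}
```

Under this reading, the close-time flush in the second trace hits the same full buffer via `LZ4Compressor.compress`, whose range check rejects an offset equal to the array length.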
[WARN 2021-09-01 16:06:31 (com.test.dc.ck.test.sink.CkSink:46)] close connection error, ignore
java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 1048576
	at net.jpountz.util.SafeUtils.checkRange(SafeUtils.java:24)
	at net.jpountz.util.SafeUtils.checkRange(SafeUtils.java:32)
	at net.jpountz.lz4.LZ4JavaSafeCompressor.compress(LZ4JavaSafeCompressor.java:141)
	at net.jpountz.lz4.LZ4Compressor.compress(LZ4Compressor.java:95)
	at com.github.housepower.jdbc.buffer.CompressedBuffedWriter.flushToTarget(CompressedBuffedWriter.java:75)
	at com.github.housepower.jdbc.serde.BinarySerializer.flushToTarget(BinarySerializer.java:112)
	at com.github.housepower.jdbc.connect.NativeClient.disconnect(NativeClient.java:153)
	at com.github.housepower.jdbc.ClickHouseConnection.close(ClickHouseConnection.java:141)
	at com.test.dc.ck.test.sink.CkSink.init(CkSink.scala:43)
	at com.test.dc.ck.test.sink.CkSink.batchSinkCk(CkSink.scala:104)
	at com.test.dc.ck.test.YsckSparkCkForHive$$anonfun$main$1$$anonfun$apply$1.apply(YsckSparkCkForHive.scala:135)
	at com.test.dc.ck.test.YsckSparkCkForHive$$anonfun$main$1$$anonfun$apply$1.apply(YsckSparkCkForHive.scala:128)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
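The log line "close connection error, ignore" shows the sink already swallows failures during close so the batch can continue. As a workaround until the driver bug is fixed, that pattern can be sketched as below; `closeQuietly` is a hypothetical helper, not part of the driver or of the reporter's code.

```java
public class QuietClose {
    // Close a JDBC resource (Statement, Connection, ...) without letting a
    // close-time failure, such as the AIOOBE in the traces above, abort the
    // surrounding batch. The failure is logged and deliberately ignored.
    static void closeQuietly(AutoCloseable c) {
        if (c == null) return;
        try {
            c.close();
        } catch (Exception e) {
            // mirrors the reporter's "close connection error, ignore"
            System.out.println("close error, ignore: " + e);
        }
    }

    public static void main(String[] args) {
        // A stand-in resource whose close() fails the same way the driver's does.
        AutoCloseable failing = () -> {
            throw new ArrayIndexOutOfBoundsException("1048576");
        };
        closeQuietly(failing);
        System.out.println("continued after close failure");
    }
}
```

Note this only hides the symptom: the insert that failed before `close()` may still have been lost, so the underlying buffer state bug still needs a fix in the driver.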

Steps to reproduce

Other descriptions
